00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2377 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3638 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.122 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.123 The recommended git tool is: git 00:00:00.123 using credential 00000000-0000-0000-0000-000000000002 00:00:00.125 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.161 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.185 Using shallow fetch with depth 1 00:00:00.185 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.185 > git --version # timeout=10 00:00:00.198 > git --version # 'git version 2.39.2' 00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.942 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.953 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.963 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.963 > git config core.sparsecheckout # timeout=10 00:00:04.972 > git read-tree -mu HEAD # timeout=10 00:00:04.985 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.000 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.000 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.097 [Pipeline] Start of Pipeline 00:00:05.110 [Pipeline] library 00:00:05.111 Loading library shm_lib@master 00:00:05.111 Library shm_lib@master is cached. Copying from home. 00:00:05.125 [Pipeline] node 00:00:05.138 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.139 [Pipeline] { 00:00:05.149 [Pipeline] catchError 00:00:05.151 [Pipeline] { 00:00:05.164 [Pipeline] wrap 00:00:05.176 [Pipeline] { 00:00:05.185 [Pipeline] stage 00:00:05.187 [Pipeline] { (Prologue) 00:00:05.204 [Pipeline] echo 00:00:05.206 Node: VM-host-SM9 00:00:05.215 [Pipeline] cleanWs 00:00:05.224 [WS-CLEANUP] Deleting project workspace... 00:00:05.224 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.230 [WS-CLEANUP] done 00:00:05.423 [Pipeline] setCustomBuildProperty 00:00:05.491 [Pipeline] httpRequest 00:00:06.527 [Pipeline] echo 00:00:06.529 Sorcerer 10.211.164.20 is alive 00:00:06.537 [Pipeline] retry 00:00:06.539 [Pipeline] { 00:00:06.552 [Pipeline] httpRequest 00:00:06.555 HttpMethod: GET 00:00:06.556 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.556 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.558 Response Code: HTTP/1.1 200 OK 00:00:06.558 Success: Status code 200 is in the accepted range: 200,404 00:00:06.559 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.264 [Pipeline] } 00:00:07.280 [Pipeline] // retry 00:00:07.287 [Pipeline] sh 00:00:07.567 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.580 [Pipeline] httpRequest 00:00:07.954 [Pipeline] echo 00:00:07.957 Sorcerer 10.211.164.20 is alive 00:00:07.964 [Pipeline] retry 00:00:07.966 [Pipeline] { 00:00:07.979 [Pipeline] httpRequest 00:00:07.984 HttpMethod: GET 00:00:07.984 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.985 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.998 Response Code: HTTP/1.1 200 OK 00:00:07.998 Success: Status code 200 is in the accepted range: 200,404 00:00:07.999 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:18.842 [Pipeline] } 00:01:18.859 [Pipeline] // retry 00:01:18.866 [Pipeline] sh 00:01:19.148 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:21.699 [Pipeline] sh 00:01:21.976 + git -C spdk log --oneline -n5 00:01:21.976 c13c99a5e test: Various fixes for Fedora40 00:01:21.976 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:21.976 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:21.976 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:21.976 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:21.992 [Pipeline] writeFile 00:01:22.006 [Pipeline] sh 00:01:22.283 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:22.293 [Pipeline] sh 00:01:22.570 + cat autorun-spdk.conf 00:01:22.570 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.570 SPDK_TEST_NVMF=1 00:01:22.570 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.570 SPDK_TEST_URING=1 00:01:22.570 SPDK_TEST_VFIOUSER=1 00:01:22.570 SPDK_TEST_USDT=1 00:01:22.570 SPDK_RUN_UBSAN=1 00:01:22.570 NET_TYPE=virt 00:01:22.570 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.575 RUN_NIGHTLY=1 00:01:22.577 [Pipeline] } 00:01:22.589 [Pipeline] // stage 00:01:22.604 [Pipeline] stage 00:01:22.606 [Pipeline] { (Run VM) 00:01:22.618 [Pipeline] sh 00:01:22.895 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:22.895 + echo 'Start stage prepare_nvme.sh' 00:01:22.895 Start stage prepare_nvme.sh 00:01:22.895 + [[ -n 2 ]] 00:01:22.895 + disk_prefix=ex2 00:01:22.895 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:22.895 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:22.895 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:22.895 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.895 ++ SPDK_TEST_NVMF=1 00:01:22.895 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.895 ++ SPDK_TEST_URING=1 00:01:22.895 ++ SPDK_TEST_VFIOUSER=1 00:01:22.895 ++ SPDK_TEST_USDT=1 00:01:22.895 ++ SPDK_RUN_UBSAN=1 00:01:22.895 ++ NET_TYPE=virt 00:01:22.895 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.895 ++ RUN_NIGHTLY=1 00:01:22.895 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:22.895 + nvme_files=() 00:01:22.895 + declare -A nvme_files 00:01:22.895 + backend_dir=/var/lib/libvirt/images/backends 00:01:22.895 + nvme_files['nvme.img']=5G 00:01:22.895 + nvme_files['nvme-cmb.img']=5G 00:01:22.895 + nvme_files['nvme-multi0.img']=4G 00:01:22.895 + nvme_files['nvme-multi1.img']=4G 00:01:22.895 + nvme_files['nvme-multi2.img']=4G 00:01:22.895 + nvme_files['nvme-openstack.img']=8G 00:01:22.895 + nvme_files['nvme-zns.img']=5G 00:01:22.895 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:22.895 + (( SPDK_TEST_FTL == 1 )) 00:01:22.895 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:22.895 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:22.895 + for nvme in "${!nvme_files[@]}" 00:01:22.895 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:22.895 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.895 + for nvme in "${!nvme_files[@]}" 00:01:22.895 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:22.895 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:22.895 + for nvme in "${!nvme_files[@]}" 00:01:22.895 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:23.154 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:23.154 + for nvme in "${!nvme_files[@]}" 00:01:23.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:23.154 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.154 + for nvme in "${!nvme_files[@]}" 00:01:23.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:23.154 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.154 + for nvme in "${!nvme_files[@]}" 00:01:23.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:23.413 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.413 + for nvme in "${!nvme_files[@]}" 00:01:23.413 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:23.413 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.413 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:23.671 + echo 'End stage prepare_nvme.sh' 00:01:23.671 End stage prepare_nvme.sh 00:01:23.683 [Pipeline] sh 00:01:23.964 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:23.964 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img 
-b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:01:23.964 00:01:23.964 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:23.964 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:23.964 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:23.964 HELP=0 00:01:23.964 DRY_RUN=0 00:01:23.964 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:23.964 NVME_DISKS_TYPE=nvme,nvme, 00:01:23.964 NVME_AUTO_CREATE=0 00:01:23.964 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:23.964 NVME_CMB=,, 00:01:23.964 NVME_PMR=,, 00:01:23.964 NVME_ZNS=,, 00:01:23.964 NVME_MS=,, 00:01:23.964 NVME_FDP=,, 00:01:23.964 SPDK_VAGRANT_DISTRO=fedora39 00:01:23.964 SPDK_VAGRANT_VMCPU=10 00:01:23.964 SPDK_VAGRANT_VMRAM=12288 00:01:23.964 SPDK_VAGRANT_PROVIDER=libvirt 00:01:23.964 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:23.964 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:23.964 SPDK_OPENSTACK_NETWORK=0 00:01:23.964 VAGRANT_PACKAGE_BOX=0 00:01:23.964 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:23.964 FORCE_DISTRO=true 00:01:23.964 VAGRANT_BOX_VERSION= 00:01:23.964 EXTRA_VAGRANTFILES= 00:01:23.964 NIC_MODEL=e1000 00:01:23.964 00:01:23.964 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:23.964 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:27.249 Bringing machine 'default' up with 'libvirt' provider... 00:01:27.518 ==> default: Creating image (snapshot of base box volume). 00:01:27.518 ==> default: Creating domain with the following settings... 
00:01:27.518 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731833524_62fba34d08b2c562234d 00:01:27.518 ==> default: -- Domain type: kvm 00:01:27.518 ==> default: -- Cpus: 10 00:01:27.518 ==> default: -- Feature: acpi 00:01:27.518 ==> default: -- Feature: apic 00:01:27.518 ==> default: -- Feature: pae 00:01:27.518 ==> default: -- Memory: 12288M 00:01:27.518 ==> default: -- Memory Backing: hugepages: 00:01:27.518 ==> default: -- Management MAC: 00:01:27.518 ==> default: -- Loader: 00:01:27.518 ==> default: -- Nvram: 00:01:27.518 ==> default: -- Base box: spdk/fedora39 00:01:27.518 ==> default: -- Storage pool: default 00:01:27.518 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731833524_62fba34d08b2c562234d.img (20G) 00:01:27.518 ==> default: -- Volume Cache: default 00:01:27.518 ==> default: -- Kernel: 00:01:27.518 ==> default: -- Initrd: 00:01:27.518 ==> default: -- Graphics Type: vnc 00:01:27.518 ==> default: -- Graphics Port: -1 00:01:27.518 ==> default: -- Graphics IP: 127.0.0.1 00:01:27.518 ==> default: -- Graphics Password: Not defined 00:01:27.518 ==> default: -- Video Type: cirrus 00:01:27.518 ==> default: -- Video VRAM: 9216 00:01:27.518 ==> default: -- Sound Type: 00:01:27.518 ==> default: -- Keymap: en-us 00:01:27.518 ==> default: -- TPM Path: 00:01:27.518 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:27.518 ==> default: -- Command line args: 00:01:27.518 ==> default: -> value=-device, 00:01:27.518 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:27.518 ==> default: -> value=-drive, 00:01:27.518 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:27.518 ==> default: -> value=-device, 00:01:27.518 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:27.518 ==> default: -> value=-device, 00:01:27.518 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:27.518 ==> default: -> value=-drive, 00:01:27.518 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:27.518 ==> default: -> value=-device, 00:01:27.518 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:27.518 ==> default: -> value=-drive, 00:01:27.518 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:27.518 ==> default: -> value=-device, 00:01:27.518 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:27.518 ==> default: -> value=-drive, 00:01:27.518 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:27.518 ==> default: -> value=-device, 00:01:27.518 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:27.792 ==> default: Creating shared folders metadata... 00:01:27.792 ==> default: Starting domain. 00:01:28.729 ==> default: Waiting for domain to get an IP address... 00:01:46.814 ==> default: Waiting for SSH to become available... 00:01:46.814 ==> default: Configuring and enabling network interfaces... 
00:01:49.348 default: SSH address: 192.168.121.39:22 00:01:49.348 default: SSH username: vagrant 00:01:49.348 default: SSH auth method: private key 00:01:51.895 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:00.039 ==> default: Mounting SSHFS shared folder... 00:02:00.976 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:00.976 ==> default: Checking Mount.. 00:02:02.354 ==> default: Folder Successfully Mounted! 00:02:02.354 ==> default: Running provisioner: file... 00:02:02.921 default: ~/.gitconfig => .gitconfig 00:02:03.489 00:02:03.489 SUCCESS! 00:02:03.489 00:02:03.489 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:03.489 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:03.489 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:03.489 00:02:03.498 [Pipeline] } 00:02:03.509 [Pipeline] // stage 00:02:03.519 [Pipeline] dir 00:02:03.519 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:03.521 [Pipeline] { 00:02:03.534 [Pipeline] catchError 00:02:03.536 [Pipeline] { 00:02:03.547 [Pipeline] sh 00:02:03.823 + vagrant ssh-config --host vagrant 00:02:03.823 + sed -ne /^Host/,$p 00:02:03.823 + tee ssh_conf 00:02:07.111 Host vagrant 00:02:07.111 HostName 192.168.121.39 00:02:07.111 User vagrant 00:02:07.111 Port 22 00:02:07.111 UserKnownHostsFile /dev/null 00:02:07.111 StrictHostKeyChecking no 00:02:07.111 PasswordAuthentication no 00:02:07.111 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:07.111 IdentitiesOnly yes 00:02:07.111 LogLevel FATAL 00:02:07.111 ForwardAgent yes 00:02:07.111 ForwardX11 yes 00:02:07.111 00:02:07.126 [Pipeline] withEnv 00:02:07.129 [Pipeline] { 00:02:07.142 [Pipeline] sh 00:02:07.423 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:07.423 source /etc/os-release 00:02:07.423 [[ -e /image.version ]] && img=$(< /image.version) 00:02:07.423 # Minimal, systemd-like check. 00:02:07.423 if [[ -e /.dockerenv ]]; then 00:02:07.423 # Clear garbage from the node's name: 00:02:07.423 # agt-er_autotest_547-896 -> autotest_547-896 00:02:07.423 # $HOSTNAME is the actual container id 00:02:07.423 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:07.423 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:07.423 # We can assume this is a mount from a host where container is running, 00:02:07.423 # so fetch its hostname to easily identify the target swarm worker. 
00:02:07.423 container="$(< /etc/hostname) ($agent)" 00:02:07.423 else 00:02:07.423 # Fallback 00:02:07.423 container=$agent 00:02:07.423 fi 00:02:07.423 fi 00:02:07.423 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:07.423 00:02:07.695 [Pipeline] } 00:02:07.714 [Pipeline] // withEnv 00:02:07.723 [Pipeline] setCustomBuildProperty 00:02:07.739 [Pipeline] stage 00:02:07.741 [Pipeline] { (Tests) 00:02:07.759 [Pipeline] sh 00:02:08.041 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:08.314 [Pipeline] sh 00:02:08.595 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:08.870 [Pipeline] timeout 00:02:08.870 Timeout set to expire in 1 hr 0 min 00:02:08.872 [Pipeline] { 00:02:08.885 [Pipeline] sh 00:02:09.163 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:09.731 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:09.743 [Pipeline] sh 00:02:10.023 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:10.295 [Pipeline] sh 00:02:10.576 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:10.849 [Pipeline] sh 00:02:11.129 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:11.389 ++ readlink -f spdk_repo 00:02:11.389 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:11.389 + [[ -n /home/vagrant/spdk_repo ]] 00:02:11.389 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:11.389 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:11.389 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:11.389 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:11.389 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:11.389 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:11.389 + cd /home/vagrant/spdk_repo 00:02:11.389 + source /etc/os-release 00:02:11.389 ++ NAME='Fedora Linux' 00:02:11.389 ++ VERSION='39 (Cloud Edition)' 00:02:11.389 ++ ID=fedora 00:02:11.389 ++ VERSION_ID=39 00:02:11.389 ++ VERSION_CODENAME= 00:02:11.389 ++ PLATFORM_ID=platform:f39 00:02:11.389 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:11.389 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:11.389 ++ LOGO=fedora-logo-icon 00:02:11.389 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:11.389 ++ HOME_URL=https://fedoraproject.org/ 00:02:11.389 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:11.389 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:11.389 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:11.389 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:11.389 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:11.389 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:11.389 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:11.389 ++ SUPPORT_END=2024-11-12 00:02:11.389 ++ VARIANT='Cloud Edition' 00:02:11.389 ++ VARIANT_ID=cloud 00:02:11.389 + uname -a 00:02:11.389 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:11.389 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:11.389 Hugepages 00:02:11.389 node hugesize free / total 00:02:11.389 node0 1048576kB 0 / 0 00:02:11.389 node0 2048kB 0 / 0 00:02:11.389 00:02:11.389 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:11.389 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:11.389 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:11.389 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:11.389 + rm -f /tmp/spdk-ld-path 00:02:11.389 + source autorun-spdk.conf 00:02:11.389 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.389 ++ SPDK_TEST_NVMF=1 00:02:11.389 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.389 ++ SPDK_TEST_URING=1 00:02:11.389 ++ SPDK_TEST_VFIOUSER=1 00:02:11.389 ++ SPDK_TEST_USDT=1 00:02:11.389 ++ SPDK_RUN_UBSAN=1 00:02:11.389 ++ NET_TYPE=virt 00:02:11.389 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.389 ++ RUN_NIGHTLY=1 00:02:11.389 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:11.389 + [[ -n '' ]] 00:02:11.389 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:11.648 + for M in /var/spdk/build-*-manifest.txt 00:02:11.648 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:11.648 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.648 + for M in /var/spdk/build-*-manifest.txt 00:02:11.648 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:11.648 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.648 + for M in /var/spdk/build-*-manifest.txt 00:02:11.648 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:11.648 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.648 ++ uname 00:02:11.648 + [[ Linux == \L\i\n\u\x ]] 00:02:11.648 + sudo dmesg -T 00:02:11.648 + sudo dmesg --clear 00:02:11.648 + dmesg_pid=5238 00:02:11.648 + [[ Fedora Linux == FreeBSD ]] 00:02:11.648 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.648 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.648 + sudo dmesg -Tw 00:02:11.648 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
00:02:11.648 + [[ -x /usr/src/fio-static/fio ]] 00:02:11.648 + export FIO_BIN=/usr/src/fio-static/fio 00:02:11.648 + FIO_BIN=/usr/src/fio-static/fio 00:02:11.648 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:11.648 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:11.648 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:11.648 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.648 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.648 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:11.648 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.648 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.648 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:11.648 Test configuration: 00:02:11.648 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.648 SPDK_TEST_NVMF=1 00:02:11.648 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.648 SPDK_TEST_URING=1 00:02:11.648 SPDK_TEST_VFIOUSER=1 00:02:11.648 SPDK_TEST_USDT=1 00:02:11.648 SPDK_RUN_UBSAN=1 00:02:11.648 NET_TYPE=virt 00:02:11.648 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.648 RUN_NIGHTLY=1 08:52:48 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:11.648 08:52:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:11.648 08:52:48 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:11.648 08:52:48 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.648 08:52:48 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.648 08:52:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.648 08:52:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.648 08:52:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.648 08:52:48 -- paths/export.sh@5 -- $ export PATH 00:02:11.648 08:52:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.648 08:52:48 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:11.648 08:52:48 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:11.648 08:52:48 -- 
common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731833568.XXXXXX 00:02:11.648 08:52:48 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731833568.ka5Dgv 00:02:11.648 08:52:48 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:11.648 08:52:48 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:11.648 08:52:48 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:11.648 08:52:48 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:11.648 08:52:48 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:11.648 08:52:48 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:11.648 08:52:48 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:11.648 08:52:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.648 08:52:48 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:11.648 08:52:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.648 08:52:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.648 08:52:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:11.648 08:52:48 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.648 Sun Nov 17 08:52:48 AM UTC 2024 00:02:11.648 08:52:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.648 LTS-67-gc13c99a5e 00:02:11.648 08:52:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.648 08:52:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.648 08:52:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.648 08:52:48 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:11.648 08:52:48 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:11.648 08:52:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.648 ************************************ 00:02:11.648 START TEST ubsan 00:02:11.648 ************************************ 00:02:11.648 using ubsan 00:02:11.648 08:52:48 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:11.648 00:02:11.648 real 0m0.000s 00:02:11.648 user 0m0.000s 00:02:11.648 sys 0m0.000s 00:02:11.648 08:52:48 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:11.648 08:52:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.648 ************************************ 00:02:11.648 END TEST ubsan 00:02:11.648 ************************************ 00:02:11.907 08:52:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.907 08:52:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.907 08:52:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.907 08:52:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.907 08:52:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.907 08:52:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.907 08:52:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.907 08:52:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.907 08:52:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:11.907 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:11.907 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.479 Using 'verbs' RDMA provider 00:02:25.253 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:40.203 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:40.203 Creating mk/config.mk...done. 00:02:40.203 Creating mk/cc.flags.mk...done. 00:02:40.203 Type 'make' to build. 00:02:40.203 08:53:14 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:40.203 08:53:14 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:40.203 08:53:14 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:40.203 08:53:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.203 ************************************ 00:02:40.203 START TEST make 00:02:40.203 ************************************ 00:02:40.203 08:53:14 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:40.203 make[1]: Nothing to be done for 'all'. 00:02:40.203 The Meson build system 00:02:40.203 Version: 1.5.0 00:02:40.203 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:40.203 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:40.203 Build type: native build 00:02:40.203 Project name: libvfio-user 00:02:40.203 Project version: 0.0.1 00:02:40.203 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.203 C linker for the host machine: cc ld.bfd 2.40-14 00:02:40.203 Host machine cpu family: x86_64 00:02:40.203 Host machine cpu: x86_64 00:02:40.203 Run-time dependency threads found: YES 00:02:40.203 Library dl found: YES 00:02:40.203 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.203 Run-time dependency json-c found: YES 0.17 00:02:40.203 Run-time dependency cmocka found: YES 1.1.7 00:02:40.203 Program pytest-3 found: NO 00:02:40.203 Program flake8 found: NO 00:02:40.203 Program misspell-fixer found: NO 00:02:40.203 Program restructuredtext-lint found: NO 00:02:40.203 Program valgrind found: YES (/usr/bin/valgrind) 00:02:40.203 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.203 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.203 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.203 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:40.203 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:40.203 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:40.203 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:40.203 Build targets in project: 8 00:02:40.203 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:40.203 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:40.203 00:02:40.203 libvfio-user 0.0.1 00:02:40.203 00:02:40.203 User defined options 00:02:40.203 buildtype : debug 00:02:40.203 default_library: shared 00:02:40.203 libdir : /usr/local/lib 00:02:40.203 00:02:40.203 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.203 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:40.462 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:40.462 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:40.462 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:40.462 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:40.462 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:40.462 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:40.462 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:40.462 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:40.462 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:40.462 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:40.462 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:40.462 [12/37] Compiling C object samples/null.p/null.c.o 00:02:40.462 [13/37] Compiling C object samples/client.p/client.c.o 00:02:40.462 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:40.462 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:40.462 [16/37] Linking target samples/client 00:02:40.462 [17/37] Compiling C object samples/server.p/server.c.o 00:02:40.720 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:40.720 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:40.720 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:40.720 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:40.720 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:40.720 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:40.720 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:40.720 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:40.720 [26/37] Linking target lib/libvfio-user.so.0.0.1 00:02:40.720 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:40.720 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:40.720 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:40.720 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:40.979 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:40.979 [32/37] Linking target test/unit_tests 00:02:40.979 [33/37] Linking target samples/lspci 00:02:40.979 [34/37] Linking target samples/server 00:02:40.979 [35/37] Linking target samples/gpio-pci-idio-16 00:02:40.979 [36/37] Linking target samples/null 00:02:40.979 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:40.979 INFO: autodetecting backend as ninja 00:02:40.979 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:40.979 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:41.546 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:41.546 ninja: no work to do. 00:02:49.676 The Meson build system 00:02:49.676 Version: 1.5.0 00:02:49.676 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:49.676 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:49.676 Build type: native build 00:02:49.676 Program cat found: YES (/usr/bin/cat) 00:02:49.676 Project name: DPDK 00:02:49.676 Project version: 23.11.0 00:02:49.676 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:49.676 C linker for the host machine: cc ld.bfd 2.40-14 00:02:49.676 Host machine cpu family: x86_64 00:02:49.676 Host machine cpu: x86_64 00:02:49.676 Message: ## Building in Developer Mode ## 00:02:49.676 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:49.676 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:49.676 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:49.676 Program python3 found: YES (/usr/bin/python3) 00:02:49.676 Program cat found: YES (/usr/bin/cat) 00:02:49.676 Compiler for C supports arguments -march=native: YES 00:02:49.676 Checking for size of "void *" : 8 00:02:49.676 Checking for size of "void *" : 8 (cached) 00:02:49.676 Library m found: YES 00:02:49.676 Library numa found: YES 00:02:49.676 Has header "numaif.h" : YES 00:02:49.676 Library fdt found: NO 00:02:49.676 Library execinfo found: NO 00:02:49.676 Has header "execinfo.h" : YES 00:02:49.676 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:49.676 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:49.676 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:49.676 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:49.676 Run-time dependency openssl found: YES 3.1.1 00:02:49.676 Run-time dependency libpcap found: YES 1.10.4 00:02:49.676 Has header "pcap.h" with dependency libpcap: YES 00:02:49.676 Compiler for C supports arguments -Wcast-qual: YES 00:02:49.676 Compiler for C supports arguments -Wdeprecated: YES 00:02:49.676 Compiler for C supports arguments -Wformat: YES 00:02:49.676 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:49.676 Compiler for C supports arguments -Wformat-security: NO 00:02:49.676 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:49.676 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:49.676 Compiler for C supports arguments -Wnested-externs: YES 00:02:49.676 Compiler for C supports arguments -Wold-style-definition: YES 00:02:49.676 Compiler for C supports arguments -Wpointer-arith: YES 00:02:49.676 Compiler for C supports arguments -Wsign-compare: YES 00:02:49.676 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:49.676 Compiler for C supports arguments -Wundef: YES 00:02:49.676 Compiler for C supports arguments -Wwrite-strings: YES 00:02:49.676 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:49.676 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:49.676 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:49.676 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:49.676 Program objdump found: YES (/usr/bin/objdump) 00:02:49.676 
Compiler for C supports arguments -mavx512f: YES 00:02:49.676 Checking if "AVX512 checking" compiles: YES 00:02:49.676 Fetching value of define "__SSE4_2__" : 1 00:02:49.676 Fetching value of define "__AES__" : 1 00:02:49.676 Fetching value of define "__AVX__" : 1 00:02:49.676 Fetching value of define "__AVX2__" : 1 00:02:49.676 Fetching value of define "__AVX512BW__" : (undefined) 00:02:49.676 Fetching value of define "__AVX512CD__" : (undefined) 00:02:49.676 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:49.676 Fetching value of define "__AVX512F__" : (undefined) 00:02:49.676 Fetching value of define "__AVX512VL__" : (undefined) 00:02:49.676 Fetching value of define "__PCLMUL__" : 1 00:02:49.676 Fetching value of define "__RDRND__" : 1 00:02:49.676 Fetching value of define "__RDSEED__" : 1 00:02:49.676 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:49.676 Fetching value of define "__znver1__" : (undefined) 00:02:49.676 Fetching value of define "__znver2__" : (undefined) 00:02:49.676 Fetching value of define "__znver3__" : (undefined) 00:02:49.676 Fetching value of define "__znver4__" : (undefined) 00:02:49.676 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:49.676 Message: lib/log: Defining dependency "log" 00:02:49.676 Message: lib/kvargs: Defining dependency "kvargs" 00:02:49.676 Message: lib/telemetry: Defining dependency "telemetry" 00:02:49.676 Checking for function "getentropy" : NO 00:02:49.676 Message: lib/eal: Defining dependency "eal" 00:02:49.676 Message: lib/ring: Defining dependency "ring" 00:02:49.676 Message: lib/rcu: Defining dependency "rcu" 00:02:49.676 Message: lib/mempool: Defining dependency "mempool" 00:02:49.676 Message: lib/mbuf: Defining dependency "mbuf" 00:02:49.676 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:49.676 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:49.676 Compiler for C supports arguments -mpclmul: YES 00:02:49.676 Compiler for C supports arguments -maes: YES 00:02:49.676 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:49.676 Compiler for C supports arguments -mavx512bw: YES 00:02:49.676 Compiler for C supports arguments -mavx512dq: YES 00:02:49.676 Compiler for C supports arguments -mavx512vl: YES 00:02:49.676 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:49.676 Compiler for C supports arguments -mavx2: YES 00:02:49.676 Compiler for C supports arguments -mavx: YES 00:02:49.676 Message: lib/net: Defining dependency "net" 00:02:49.676 Message: lib/meter: Defining dependency "meter" 00:02:49.676 Message: lib/ethdev: Defining dependency "ethdev" 00:02:49.676 Message: lib/pci: Defining dependency "pci" 00:02:49.676 Message: lib/cmdline: Defining dependency "cmdline" 00:02:49.676 Message: lib/hash: Defining dependency "hash" 00:02:49.676 Message: lib/timer: Defining dependency "timer" 00:02:49.676 Message: lib/compressdev: Defining dependency "compressdev" 00:02:49.676 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:49.676 Message: lib/dmadev: Defining dependency "dmadev" 00:02:49.676 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:49.676 Message: lib/power: Defining dependency "power" 00:02:49.676 Message: lib/reorder: Defining dependency "reorder" 00:02:49.676 Message: lib/security: Defining dependency "security" 00:02:49.676 Has header "linux/userfaultfd.h" : YES 00:02:49.676 Has header "linux/vduse.h" : YES 00:02:49.676 Message: lib/vhost: Defining dependency "vhost" 00:02:49.677 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:49.677 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:49.677 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:49.677 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:49.677 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:49.677 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:49.677 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:49.677 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:49.677 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:49.677 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:49.677 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:49.677 Configuring doxy-api-html.conf using configuration 00:02:49.677 Configuring doxy-api-man.conf using configuration 00:02:49.677 Program mandb found: YES (/usr/bin/mandb) 00:02:49.677 Program sphinx-build found: NO 00:02:49.677 Configuring rte_build_config.h using configuration 00:02:49.677 Message: 00:02:49.677 ================= 00:02:49.677 Applications Enabled 00:02:49.677 ================= 00:02:49.677 00:02:49.677 apps: 00:02:49.677 00:02:49.677 00:02:49.677 Message: 00:02:49.677 ================= 00:02:49.677 Libraries Enabled 00:02:49.677 ================= 00:02:49.677 00:02:49.677 libs: 00:02:49.677 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:49.677 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:49.677 cryptodev, dmadev, power, reorder, security, vhost, 00:02:49.677 00:02:49.677 Message: 00:02:49.677 =============== 00:02:49.677 Drivers Enabled 00:02:49.677 =============== 00:02:49.677 00:02:49.677 common: 00:02:49.677 00:02:49.677 bus: 00:02:49.677 pci, vdev, 00:02:49.677 mempool: 00:02:49.677 ring, 00:02:49.677 dma: 00:02:49.677 00:02:49.677 net: 00:02:49.677 00:02:49.677 crypto: 00:02:49.677 00:02:49.677 compress: 00:02:49.677 00:02:49.677 vdpa: 00:02:49.677 00:02:49.677 00:02:49.677 Message: 00:02:49.677 ================= 00:02:49.677 Content Skipped 00:02:49.677 ================= 00:02:49.677 00:02:49.677 apps: 00:02:49.677 dumpcap: explicitly disabled via build config 00:02:49.677 graph: explicitly disabled via build config 00:02:49.677 pdump: explicitly disabled via build config 00:02:49.677 proc-info: explicitly disabled via build config 00:02:49.677 test-acl: explicitly disabled via build config 00:02:49.677 test-bbdev: explicitly disabled via build config 00:02:49.677 test-cmdline: explicitly disabled via build config 00:02:49.677 test-compress-perf: explicitly disabled via build config 00:02:49.677 test-crypto-perf: explicitly disabled via build config 00:02:49.677 test-dma-perf: explicitly disabled via build config 00:02:49.677 test-eventdev: explicitly disabled via build config 00:02:49.677 test-fib: explicitly disabled via build config 00:02:49.677 test-flow-perf: explicitly disabled via build config 00:02:49.677 test-gpudev: explicitly disabled via build config 00:02:49.677 test-mldev: explicitly disabled via build config 00:02:49.677 test-pipeline: explicitly disabled via build config 00:02:49.677 test-pmd: explicitly disabled via build config 00:02:49.677 test-regex: explicitly disabled via build config 00:02:49.677 test-sad: explicitly disabled via build config 00:02:49.677 test-security-perf: explicitly disabled via build config 00:02:49.677 00:02:49.677 libs: 00:02:49.677 metrics: explicitly 
disabled via build config 00:02:49.677 acl: explicitly disabled via build config 00:02:49.677 bbdev: explicitly disabled via build config 00:02:49.677 bitratestats: explicitly disabled via build config 00:02:49.677 bpf: explicitly disabled via build config 00:02:49.677 cfgfile: explicitly disabled via build config 00:02:49.677 distributor: explicitly disabled via build config 00:02:49.677 efd: explicitly disabled via build config 00:02:49.677 eventdev: explicitly disabled via build config 00:02:49.677 dispatcher: explicitly disabled via build config 00:02:49.677 gpudev: explicitly disabled via build config 00:02:49.677 gro: explicitly disabled via build config 00:02:49.677 gso: explicitly disabled via build config 00:02:49.677 ip_frag: explicitly disabled via build config 00:02:49.677 jobstats: explicitly disabled via build config 00:02:49.677 latencystats: explicitly disabled via build config 00:02:49.677 lpm: explicitly disabled via build config 00:02:49.677 member: explicitly disabled via build config 00:02:49.677 pcapng: explicitly disabled via build config 00:02:49.677 rawdev: explicitly disabled via build config 00:02:49.677 regexdev: explicitly disabled via build config 00:02:49.677 mldev: explicitly disabled via build config 00:02:49.677 rib: explicitly disabled via build config 00:02:49.677 sched: explicitly disabled via build config 00:02:49.677 stack: explicitly disabled via build config 00:02:49.677 ipsec: explicitly disabled via build config 00:02:49.677 pdcp: explicitly disabled via build config 00:02:49.677 fib: explicitly disabled via build config 00:02:49.677 port: explicitly disabled via build config 00:02:49.677 pdump: explicitly disabled via build config 00:02:49.677 table: explicitly disabled via build config 00:02:49.677 pipeline: explicitly disabled via build config 00:02:49.677 graph: explicitly disabled via build config 00:02:49.677 node: explicitly disabled via build config 00:02:49.677 00:02:49.677 drivers: 00:02:49.677 common/cpt: not in enabled drivers build config 00:02:49.677 common/dpaax: not in enabled drivers build config 00:02:49.677 common/iavf: not in enabled drivers build config 00:02:49.677 common/idpf: not in enabled drivers build config 00:02:49.677 common/mvep: not in enabled drivers build config 00:02:49.677 common/octeontx: not in enabled drivers build config 00:02:49.677 bus/auxiliary: not in enabled drivers build config 00:02:49.677 bus/cdx: not in enabled drivers build config 00:02:49.677 bus/dpaa: not in enabled drivers build config 00:02:49.677 bus/fslmc: not in enabled drivers build config 00:02:49.677 bus/ifpga: not in enabled drivers build config 00:02:49.677 bus/platform: not in enabled drivers build config 00:02:49.677 bus/vmbus: not in enabled drivers build config 00:02:49.677 common/cnxk: not in enabled drivers build config 00:02:49.677 common/mlx5: not in enabled drivers build config 00:02:49.677 common/nfp: not in enabled drivers build config 00:02:49.677 common/qat: not in enabled drivers build config 00:02:49.677 common/sfc_efx: not in enabled drivers build config 00:02:49.677 mempool/bucket: not in enabled drivers build config 00:02:49.677 mempool/cnxk: not in enabled drivers build config 00:02:49.677 mempool/dpaa: not in enabled drivers build config 00:02:49.677 mempool/dpaa2: not in enabled drivers build config 00:02:49.677 mempool/octeontx: not in enabled drivers build config 00:02:49.677 mempool/stack: not in enabled drivers build config 00:02:49.677 dma/cnxk: not in enabled drivers build config 00:02:49.677 dma/dpaa: not in 
enabled drivers build config 00:02:49.677 dma/dpaa2: not in enabled drivers build config 00:02:49.677 dma/hisilicon: not in enabled drivers build config 00:02:49.677 dma/idxd: not in enabled drivers build config 00:02:49.677 dma/ioat: not in enabled drivers build config 00:02:49.677 dma/skeleton: not in enabled drivers build config 00:02:49.677 net/af_packet: not in enabled drivers build config 00:02:49.677 net/af_xdp: not in enabled drivers build config 00:02:49.677 net/ark: not in enabled drivers build config 00:02:49.677 net/atlantic: not in enabled drivers build config 00:02:49.677 net/avp: not in enabled drivers build config 00:02:49.677 net/axgbe: not in enabled drivers build config 00:02:49.677 net/bnx2x: not in enabled drivers build config 00:02:49.677 net/bnxt: not in enabled drivers build config 00:02:49.677 net/bonding: not in enabled drivers build config 00:02:49.677 net/cnxk: not in enabled drivers build config 00:02:49.677 net/cpfl: not in enabled drivers build config 00:02:49.677 net/cxgbe: not in enabled drivers build config 00:02:49.677 net/dpaa: not in enabled drivers build config 00:02:49.677 net/dpaa2: not in enabled drivers build config 00:02:49.677 net/e1000: not in enabled drivers build config 00:02:49.677 net/ena: not in enabled drivers build config 00:02:49.677 net/enetc: not in enabled drivers build config 00:02:49.677 net/enetfec: not in enabled drivers build config 00:02:49.677 net/enic: not in enabled drivers build config 00:02:49.677 net/failsafe: not in enabled drivers build config 00:02:49.677 net/fm10k: not in enabled drivers build config 00:02:49.677 net/gve: not in enabled drivers build config 00:02:49.677 net/hinic: not in enabled drivers build config 00:02:49.677 net/hns3: not in enabled drivers build config 00:02:49.677 net/i40e: not in enabled drivers build config 00:02:49.677 net/iavf: not in enabled drivers build config 00:02:49.677 net/ice: not in enabled drivers build config 00:02:49.677 net/idpf: not in enabled drivers build config 00:02:49.677 net/igc: not in enabled drivers build config 00:02:49.677 net/ionic: not in enabled drivers build config 00:02:49.677 net/ipn3ke: not in enabled drivers build config 00:02:49.677 net/ixgbe: not in enabled drivers build config 00:02:49.677 net/mana: not in enabled drivers build config 00:02:49.677 net/memif: not in enabled drivers build config 00:02:49.677 net/mlx4: not in enabled drivers build config 00:02:49.677 net/mlx5: not in enabled drivers build config 00:02:49.677 net/mvneta: not in enabled drivers build config 00:02:49.677 net/mvpp2: not in enabled drivers build config 00:02:49.677 net/netvsc: not in enabled drivers build config 00:02:49.677 net/nfb: not in enabled drivers build config 00:02:49.677 net/nfp: not in enabled drivers build config 00:02:49.677 net/ngbe: not in enabled drivers build config 00:02:49.677 net/null: not in enabled drivers build config 00:02:49.677 net/octeontx: not in enabled drivers build config 00:02:49.677 net/octeon_ep: not in enabled drivers build config 00:02:49.677 net/pcap: not in enabled drivers build config 00:02:49.677 net/pfe: not in enabled drivers build config 00:02:49.678 net/qede: not in enabled drivers build config 00:02:49.678 net/ring: not in enabled drivers build config 00:02:49.678 net/sfc: not in enabled drivers build config 00:02:49.678 net/softnic: not in enabled drivers build config 00:02:49.678 net/tap: not in enabled drivers build config 00:02:49.678 net/thunderx: not in enabled drivers build config 00:02:49.678 net/txgbe: not in enabled drivers 
build config 00:02:49.678 net/vdev_netvsc: not in enabled drivers build config 00:02:49.678 net/vhost: not in enabled drivers build config 00:02:49.678 net/virtio: not in enabled drivers build config 00:02:49.678 net/vmxnet3: not in enabled drivers build config 00:02:49.678 raw/*: missing internal dependency, "rawdev" 00:02:49.678 crypto/armv8: not in enabled drivers build config 00:02:49.678 crypto/bcmfs: not in enabled drivers build config 00:02:49.678 crypto/caam_jr: not in enabled drivers build config 00:02:49.678 crypto/ccp: not in enabled drivers build config 00:02:49.678 crypto/cnxk: not in enabled drivers build config 00:02:49.678 crypto/dpaa_sec: not in enabled drivers build config 00:02:49.678 crypto/dpaa2_sec: not in enabled drivers build config 00:02:49.678 crypto/ipsec_mb: not in enabled drivers build config 00:02:49.678 crypto/mlx5: not in enabled drivers build config 00:02:49.678 crypto/mvsam: not in enabled drivers build config 00:02:49.678 crypto/nitrox: not in enabled drivers build config 00:02:49.678 crypto/null: not in enabled drivers build config 00:02:49.678 crypto/octeontx: not in enabled drivers build config 00:02:49.678 crypto/openssl: not in enabled drivers build config 00:02:49.678 crypto/scheduler: not in enabled drivers build config 00:02:49.678 crypto/uadk: not in enabled drivers build config 00:02:49.678 crypto/virtio: not in enabled drivers build config 00:02:49.678 compress/isal: not in enabled drivers build config 00:02:49.678 compress/mlx5: not in enabled drivers build config 00:02:49.678 compress/octeontx: not in enabled drivers build config 00:02:49.678 compress/zlib: not in enabled drivers build config 00:02:49.678 regex/*: missing internal dependency, "regexdev" 00:02:49.678 ml/*: missing internal dependency, "mldev" 00:02:49.678 vdpa/ifc: not in enabled drivers build config 00:02:49.678 vdpa/mlx5: not in enabled drivers build config 00:02:49.678 vdpa/nfp: not in enabled drivers build config 00:02:49.678 vdpa/sfc: not in enabled drivers build config 00:02:49.678 event/*: missing internal dependency, "eventdev" 00:02:49.678 baseband/*: missing internal dependency, "bbdev" 00:02:49.678 gpu/*: missing internal dependency, "gpudev" 00:02:49.678 00:02:49.678 00:02:49.958 Build targets in project: 85 00:02:49.958 00:02:49.958 DPDK 23.11.0 00:02:49.958 00:02:49.958 User defined options 00:02:49.958 buildtype : debug 00:02:49.958 default_library : shared 00:02:49.958 libdir : lib 00:02:49.958 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:49.958 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:49.958 c_link_args : 00:02:49.958 cpu_instruction_set: native 00:02:49.958 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:49.958 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:49.958 enable_docs : false 00:02:49.958 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:49.958 enable_kmods : false 00:02:49.958 tests : false 00:02:49.958 00:02:49.958 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.217 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:50.475 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.475 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.475 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:50.475 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.475 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.475 [6/265] Linking static target lib/librte_log.a 00:02:50.475 [7/265] Linking static target lib/librte_kvargs.a 00:02:50.475 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.475 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.475 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:51.043 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.043 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:51.301 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:51.301 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:51.301 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:51.301 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:51.301 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.301 [18/265] Linking static target lib/librte_telemetry.a 00:02:51.560 [19/265] Linking target lib/librte_log.so.24.0 00:02:51.560 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:51.560 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:51.560 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:51.560 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:51.818 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:51.818 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:51.818 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:52.076 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:52.076 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:52.076 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:52.076 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:52.336 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:52.336 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:52.336 [33/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.336 [34/265] Linking target lib/librte_telemetry.so.24.0 00:02:52.336 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:52.594 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:52.594 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:52.594 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:52.594 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:52.852 [40/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:52.852 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:52.852 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:52.852 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:52.852 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:52.852 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.111 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:53.111 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.111 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:53.369 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:53.369 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:53.628 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:53.887 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:53.887 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:53.887 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:53.887 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:53.887 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:53.887 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:53.887 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:53.887 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.146 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:54.146 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:54.146 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:54.146 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.404 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.663 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:54.663 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:54.922 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:54.922 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:54.922 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:54.922 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:54.922 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:54.922 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:54.922 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:54.922 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:54.922 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:54.922 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:54.922 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.181 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:55.441 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.699 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:55.699 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.958 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.958 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.958 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:55.958 [85/265] Linking static target lib/librte_eal.a 00:02:55.958 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.958 [87/265] Linking static target lib/librte_ring.a 00:02:56.216 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.216 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:56.216 [90/265] Linking static target lib/librte_rcu.a 00:02:56.216 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.475 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.475 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.733 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.733 [95/265] Linking static target lib/librte_mempool.a 00:02:56.733 [96/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.733 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.733 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.992 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:56.992 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.992 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:57.250 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.250 [103/265] Linking static target lib/librte_mbuf.a 00:02:57.509 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.509 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.509 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.509 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.768 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.768 [109/265] Linking static target lib/librte_net.a 00:02:57.768 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.768 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:57.768 [112/265] Linking static target lib/librte_meter.a 00:02:57.768 [113/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.026 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.026 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.026 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.284 [117/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.284 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.543 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.543 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.801 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:02:58.801 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:59.060 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.060 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.060 [125/265] Linking static target lib/librte_pci.a 00:02:59.060 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:59.318 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:59.318 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.318 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.318 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.576 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.576 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.576 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.576 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.576 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.576 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.576 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.576 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.576 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.576 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.576 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.576 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.835 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:00.093 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:00.093 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:00.093 [146/265] Linking static target lib/librte_cmdline.a 00:03:00.093 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:00.093 [148/265] Linking static target lib/librte_ethdev.a 00:03:00.351 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.352 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.352 [151/265] Linking static target lib/librte_timer.a 00:03:00.352 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.352 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.610 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.868 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.868 [156/265] Linking static target lib/librte_compressdev.a 00:03:00.868 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:00.868 [158/265] Linking static target lib/librte_hash.a 00:03:00.868 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:01.126 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.126 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:01.126 [162/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:03:01.385 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:01.385 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.385 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:01.385 [166/265] Linking static target lib/librte_dmadev.a 00:03:01.643 [167/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.643 [168/265] Linking static target lib/librte_cryptodev.a 00:03:01.643 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:01.643 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:01.643 [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.901 [172/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.901 [173/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.901 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.159 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.159 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.419 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:02.419 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.419 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.419 [180/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:02.419 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.677 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.677 [183/265] Linking static target lib/librte_power.a 00:03:02.936 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.936 [185/265] Linking static target lib/librte_reorder.a 00:03:02.936 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.936 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:02.936 [188/265] Linking static target lib/librte_security.a 00:03:03.194 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.452 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.452 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.452 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.711 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.969 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.969 [195/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.969 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.969 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.248 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.248 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.526 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.526 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.526 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.526 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.784 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.784 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.784 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.784 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.784 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.784 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.043 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.043 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.043 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.043 [213/265] Linking static target drivers/librte_bus_pci.a 00:03:05.043 [214/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.043 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.043 [216/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.043 [217/265] Linking static target drivers/librte_bus_vdev.a 00:03:05.043 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.043 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.301 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.301 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.302 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.302 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.302 [224/265] Linking static target drivers/librte_mempool_ring.a 00:03:05.302 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.237 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.495 [227/265] Linking static target lib/librte_vhost.a 00:03:07.061 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.061 [229/265] Linking target lib/librte_eal.so.24.0 00:03:07.320 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:07.320 [231/265] Linking target lib/librte_meter.so.24.0 00:03:07.320 [232/265] Linking target lib/librte_pci.so.24.0 00:03:07.320 [233/265] Linking target lib/librte_timer.so.24.0 00:03:07.320 [234/265] Linking target lib/librte_ring.so.24.0 00:03:07.320 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:07.320 [236/265] Linking target lib/librte_dmadev.so.24.0 00:03:07.320 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:07.320 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:07.320 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:07.320 [240/265] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:07.320 [241/265] Linking target lib/librte_mempool.so.24.0 00:03:07.320 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:07.320 [243/265] Linking target lib/librte_rcu.so.24.0 00:03:07.320 [244/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:07.578 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:07.578 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:07.578 [247/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.578 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:07.578 [249/265] Linking target lib/librte_mbuf.so.24.0 00:03:07.837 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:07.837 [251/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.837 [252/265] Linking target lib/librte_reorder.so.24.0 00:03:07.837 [253/265] Linking target lib/librte_net.so.24.0 00:03:07.837 [254/265] Linking target lib/librte_compressdev.so.24.0 00:03:07.837 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:03:08.096 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:08.096 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:08.096 [258/265] Linking target lib/librte_cmdline.so.24.0 00:03:08.096 [259/265] Linking target lib/librte_hash.so.24.0 00:03:08.096 [260/265] Linking target lib/librte_security.so.24.0 00:03:08.096 [261/265] Linking target lib/librte_ethdev.so.24.0 00:03:08.096 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:08.354 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:08.355 [264/265] Linking target lib/librte_power.so.24.0 00:03:08.355 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:08.355 INFO: autodetecting backend as ninja 00:03:08.355 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:09.731 CC lib/ut/ut.o 00:03:09.731 CC lib/ut_mock/mock.o 00:03:09.731 CC lib/log/log.o 00:03:09.731 CC lib/log/log_flags.o 00:03:09.731 CC lib/log/log_deprecated.o 00:03:09.731 LIB libspdk_ut_mock.a 00:03:09.731 SO libspdk_ut_mock.so.5.0 00:03:09.731 LIB libspdk_ut.a 00:03:09.731 SO libspdk_ut.so.1.0 00:03:09.731 SYMLINK libspdk_ut_mock.so 00:03:09.731 LIB libspdk_log.a 00:03:09.731 SYMLINK libspdk_ut.so 00:03:09.731 SO libspdk_log.so.6.1 00:03:09.990 SYMLINK libspdk_log.so 00:03:09.990 CC lib/dma/dma.o 00:03:09.990 CXX lib/trace_parser/trace.o 00:03:09.990 CC lib/util/base64.o 00:03:09.990 CC lib/util/bit_array.o 00:03:09.990 CC lib/util/cpuset.o 00:03:09.990 CC lib/util/crc16.o 00:03:09.990 CC lib/util/crc32.o 00:03:09.990 CC lib/util/crc32c.o 00:03:09.990 CC lib/ioat/ioat.o 00:03:09.990 CC lib/vfio_user/host/vfio_user_pci.o 00:03:10.249 CC lib/util/crc32_ieee.o 00:03:10.249 CC lib/util/crc64.o 00:03:10.249 CC lib/util/dif.o 00:03:10.249 CC lib/util/fd.o 00:03:10.249 LIB libspdk_dma.a 00:03:10.249 CC lib/vfio_user/host/vfio_user.o 00:03:10.249 SO libspdk_dma.so.3.0 00:03:10.249 CC lib/util/file.o 00:03:10.249 SYMLINK libspdk_dma.so 00:03:10.249 CC lib/util/hexlify.o 00:03:10.249 CC lib/util/iov.o 00:03:10.249 CC lib/util/math.o 00:03:10.508 CC lib/util/pipe.o 00:03:10.508 
LIB libspdk_ioat.a 00:03:10.508 CC lib/util/strerror_tls.o 00:03:10.508 SO libspdk_ioat.so.6.0 00:03:10.508 CC lib/util/string.o 00:03:10.508 SYMLINK libspdk_ioat.so 00:03:10.508 CC lib/util/uuid.o 00:03:10.508 CC lib/util/fd_group.o 00:03:10.508 LIB libspdk_vfio_user.a 00:03:10.508 CC lib/util/xor.o 00:03:10.508 SO libspdk_vfio_user.so.4.0 00:03:10.508 CC lib/util/zipf.o 00:03:10.508 SYMLINK libspdk_vfio_user.so 00:03:10.766 LIB libspdk_util.a 00:03:11.025 SO libspdk_util.so.8.0 00:03:11.025 SYMLINK libspdk_util.so 00:03:11.025 LIB libspdk_trace_parser.a 00:03:11.025 SO libspdk_trace_parser.so.4.0 00:03:11.025 CC lib/conf/conf.o 00:03:11.025 CC lib/json/json_parse.o 00:03:11.025 CC lib/json/json_util.o 00:03:11.025 CC lib/rdma/common.o 00:03:11.025 CC lib/json/json_write.o 00:03:11.025 CC lib/rdma/rdma_verbs.o 00:03:11.025 CC lib/idxd/idxd.o 00:03:11.283 CC lib/vmd/vmd.o 00:03:11.283 CC lib/env_dpdk/env.o 00:03:11.283 SYMLINK libspdk_trace_parser.so 00:03:11.283 CC lib/vmd/led.o 00:03:11.283 CC lib/env_dpdk/memory.o 00:03:11.283 CC lib/env_dpdk/pci.o 00:03:11.283 LIB libspdk_conf.a 00:03:11.283 CC lib/env_dpdk/init.o 00:03:11.283 SO libspdk_conf.so.5.0 00:03:11.542 LIB libspdk_rdma.a 00:03:11.542 LIB libspdk_json.a 00:03:11.542 CC lib/env_dpdk/threads.o 00:03:11.542 SYMLINK libspdk_conf.so 00:03:11.542 CC lib/env_dpdk/pci_ioat.o 00:03:11.542 SO libspdk_rdma.so.5.0 00:03:11.542 SO libspdk_json.so.5.1 00:03:11.542 SYMLINK libspdk_rdma.so 00:03:11.542 SYMLINK libspdk_json.so 00:03:11.542 CC lib/env_dpdk/pci_virtio.o 00:03:11.542 CC lib/env_dpdk/pci_vmd.o 00:03:11.542 CC lib/env_dpdk/pci_idxd.o 00:03:11.542 CC lib/env_dpdk/pci_event.o 00:03:11.801 CC lib/env_dpdk/sigbus_handler.o 00:03:11.801 CC lib/idxd/idxd_user.o 00:03:11.801 CC lib/env_dpdk/pci_dpdk.o 00:03:11.801 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:11.801 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:11.801 CC lib/idxd/idxd_kernel.o 00:03:11.801 CC lib/jsonrpc/jsonrpc_server.o 00:03:11.801 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:11.801 LIB libspdk_vmd.a 00:03:11.801 SO libspdk_vmd.so.5.0 00:03:11.801 CC lib/jsonrpc/jsonrpc_client.o 00:03:11.801 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:12.059 LIB libspdk_idxd.a 00:03:12.059 SYMLINK libspdk_vmd.so 00:03:12.059 SO libspdk_idxd.so.11.0 00:03:12.059 SYMLINK libspdk_idxd.so 00:03:12.059 LIB libspdk_jsonrpc.a 00:03:12.059 SO libspdk_jsonrpc.so.5.1 00:03:12.317 SYMLINK libspdk_jsonrpc.so 00:03:12.317 CC lib/rpc/rpc.o 00:03:12.574 LIB libspdk_env_dpdk.a 00:03:12.574 LIB libspdk_rpc.a 00:03:12.574 SO libspdk_rpc.so.5.0 00:03:12.574 SYMLINK libspdk_rpc.so 00:03:12.574 SO libspdk_env_dpdk.so.13.0 00:03:12.832 CC lib/trace/trace.o 00:03:12.832 CC lib/trace/trace_flags.o 00:03:12.832 CC lib/trace/trace_rpc.o 00:03:12.832 CC lib/sock/sock.o 00:03:12.832 CC lib/sock/sock_rpc.o 00:03:12.832 CC lib/notify/notify.o 00:03:12.833 CC lib/notify/notify_rpc.o 00:03:12.833 SYMLINK libspdk_env_dpdk.so 00:03:13.091 LIB libspdk_notify.a 00:03:13.091 SO libspdk_notify.so.5.0 00:03:13.091 LIB libspdk_trace.a 00:03:13.091 SYMLINK libspdk_notify.so 00:03:13.091 SO libspdk_trace.so.9.0 00:03:13.091 SYMLINK libspdk_trace.so 00:03:13.091 LIB libspdk_sock.a 00:03:13.350 SO libspdk_sock.so.8.0 00:03:13.350 SYMLINK libspdk_sock.so 00:03:13.350 CC lib/thread/iobuf.o 00:03:13.350 CC lib/thread/thread.o 00:03:13.350 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.608 CC lib/nvme/nvme_fabric.o 00:03:13.608 CC lib/nvme/nvme_ctrlr.o 00:03:13.608 CC lib/nvme/nvme_pcie_common.o 00:03:13.608 CC lib/nvme/nvme_ns.o 00:03:13.608 CC 
lib/nvme/nvme_pcie.o 00:03:13.608 CC lib/nvme/nvme_qpair.o 00:03:13.608 CC lib/nvme/nvme_ns_cmd.o 00:03:13.608 CC lib/nvme/nvme.o 00:03:14.174 CC lib/nvme/nvme_quirks.o 00:03:14.174 CC lib/nvme/nvme_transport.o 00:03:14.433 CC lib/nvme/nvme_discovery.o 00:03:14.433 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:14.433 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.433 CC lib/nvme/nvme_tcp.o 00:03:14.690 CC lib/nvme/nvme_opal.o 00:03:14.690 CC lib/nvme/nvme_io_msg.o 00:03:14.690 CC lib/nvme/nvme_poll_group.o 00:03:14.948 LIB libspdk_thread.a 00:03:14.948 SO libspdk_thread.so.9.0 00:03:14.948 CC lib/nvme/nvme_zns.o 00:03:14.948 SYMLINK libspdk_thread.so 00:03:14.948 CC lib/nvme/nvme_cuse.o 00:03:14.948 CC lib/nvme/nvme_vfio_user.o 00:03:14.948 CC lib/nvme/nvme_rdma.o 00:03:15.207 CC lib/accel/accel.o 00:03:15.207 CC lib/blob/blobstore.o 00:03:15.207 CC lib/blob/request.o 00:03:15.465 CC lib/blob/zeroes.o 00:03:15.465 CC lib/blob/blob_bs_dev.o 00:03:15.723 CC lib/accel/accel_rpc.o 00:03:15.723 CC lib/accel/accel_sw.o 00:03:15.723 CC lib/virtio/virtio.o 00:03:15.723 CC lib/init/json_config.o 00:03:15.723 CC lib/virtio/virtio_vhost_user.o 00:03:15.981 CC lib/vfu_tgt/tgt_endpoint.o 00:03:15.981 CC lib/vfu_tgt/tgt_rpc.o 00:03:15.981 CC lib/init/subsystem.o 00:03:15.981 CC lib/init/subsystem_rpc.o 00:03:15.981 CC lib/virtio/virtio_vfio_user.o 00:03:15.981 CC lib/virtio/virtio_pci.o 00:03:15.981 CC lib/init/rpc.o 00:03:16.240 LIB libspdk_accel.a 00:03:16.240 LIB libspdk_vfu_tgt.a 00:03:16.240 SO libspdk_accel.so.14.0 00:03:16.240 SO libspdk_vfu_tgt.so.2.0 00:03:16.240 LIB libspdk_init.a 00:03:16.240 SYMLINK libspdk_accel.so 00:03:16.240 SYMLINK libspdk_vfu_tgt.so 00:03:16.240 SO libspdk_init.so.4.0 00:03:16.240 SYMLINK libspdk_init.so 00:03:16.499 LIB libspdk_virtio.a 00:03:16.499 LIB libspdk_nvme.a 00:03:16.499 CC lib/bdev/bdev.o 00:03:16.499 CC lib/bdev/bdev_rpc.o 00:03:16.499 CC lib/bdev/bdev_zone.o 00:03:16.499 CC lib/bdev/part.o 00:03:16.499 CC lib/bdev/scsi_nvme.o 00:03:16.499 SO libspdk_virtio.so.6.0 00:03:16.499 CC lib/event/app.o 00:03:16.499 CC lib/event/reactor.o 00:03:16.499 SYMLINK libspdk_virtio.so 00:03:16.499 CC lib/event/log_rpc.o 00:03:16.499 SO libspdk_nvme.so.12.0 00:03:16.758 CC lib/event/app_rpc.o 00:03:16.758 CC lib/event/scheduler_static.o 00:03:16.758 SYMLINK libspdk_nvme.so 00:03:17.016 LIB libspdk_event.a 00:03:17.016 SO libspdk_event.so.12.0 00:03:17.016 SYMLINK libspdk_event.so 00:03:17.977 LIB libspdk_blob.a 00:03:17.977 SO libspdk_blob.so.10.1 00:03:18.235 SYMLINK libspdk_blob.so 00:03:18.235 CC lib/blobfs/blobfs.o 00:03:18.235 CC lib/lvol/lvol.o 00:03:18.235 CC lib/blobfs/tree.o 00:03:19.171 LIB libspdk_bdev.a 00:03:19.171 SO libspdk_bdev.so.14.0 00:03:19.171 LIB libspdk_blobfs.a 00:03:19.171 SO libspdk_blobfs.so.9.0 00:03:19.430 SYMLINK libspdk_bdev.so 00:03:19.430 LIB libspdk_lvol.a 00:03:19.430 SYMLINK libspdk_blobfs.so 00:03:19.430 SO libspdk_lvol.so.9.1 00:03:19.430 CC lib/scsi/dev.o 00:03:19.430 CC lib/scsi/lun.o 00:03:19.430 CC lib/scsi/port.o 00:03:19.430 CC lib/scsi/scsi.o 00:03:19.430 CC lib/ublk/ublk.o 00:03:19.430 CC lib/ublk/ublk_rpc.o 00:03:19.430 CC lib/nbd/nbd.o 00:03:19.430 CC lib/ftl/ftl_core.o 00:03:19.430 CC lib/nvmf/ctrlr.o 00:03:19.430 SYMLINK libspdk_lvol.so 00:03:19.430 CC lib/scsi/scsi_bdev.o 00:03:19.689 CC lib/nvmf/ctrlr_discovery.o 00:03:19.689 CC lib/scsi/scsi_pr.o 00:03:19.689 CC lib/ftl/ftl_init.o 00:03:19.689 CC lib/ftl/ftl_layout.o 00:03:19.689 CC lib/ftl/ftl_debug.o 00:03:19.947 CC lib/nvmf/ctrlr_bdev.o 00:03:19.947 CC lib/nvmf/subsystem.o 
00:03:19.947 CC lib/nbd/nbd_rpc.o 00:03:19.947 CC lib/nvmf/nvmf.o 00:03:19.947 CC lib/scsi/scsi_rpc.o 00:03:19.947 CC lib/ftl/ftl_io.o 00:03:19.947 LIB libspdk_nbd.a 00:03:20.205 CC lib/ftl/ftl_sb.o 00:03:20.205 SO libspdk_nbd.so.6.0 00:03:20.205 LIB libspdk_ublk.a 00:03:20.205 CC lib/scsi/task.o 00:03:20.205 CC lib/nvmf/nvmf_rpc.o 00:03:20.205 SYMLINK libspdk_nbd.so 00:03:20.206 SO libspdk_ublk.so.2.0 00:03:20.206 CC lib/nvmf/transport.o 00:03:20.206 SYMLINK libspdk_ublk.so 00:03:20.206 CC lib/ftl/ftl_l2p.o 00:03:20.206 CC lib/nvmf/tcp.o 00:03:20.206 CC lib/ftl/ftl_l2p_flat.o 00:03:20.464 LIB libspdk_scsi.a 00:03:20.464 SO libspdk_scsi.so.8.0 00:03:20.464 CC lib/ftl/ftl_nv_cache.o 00:03:20.464 SYMLINK libspdk_scsi.so 00:03:20.464 CC lib/ftl/ftl_band.o 00:03:20.464 CC lib/nvmf/vfio_user.o 00:03:20.464 CC lib/ftl/ftl_band_ops.o 00:03:21.030 CC lib/nvmf/rdma.o 00:03:21.030 CC lib/ftl/ftl_writer.o 00:03:21.030 CC lib/ftl/ftl_rq.o 00:03:21.030 CC lib/iscsi/conn.o 00:03:21.030 CC lib/vhost/vhost.o 00:03:21.030 CC lib/ftl/ftl_reloc.o 00:03:21.289 CC lib/ftl/ftl_l2p_cache.o 00:03:21.289 CC lib/iscsi/init_grp.o 00:03:21.289 CC lib/ftl/ftl_p2l.o 00:03:21.547 CC lib/vhost/vhost_rpc.o 00:03:21.547 CC lib/ftl/mngt/ftl_mngt.o 00:03:21.547 CC lib/iscsi/iscsi.o 00:03:21.547 CC lib/iscsi/md5.o 00:03:21.547 CC lib/iscsi/param.o 00:03:21.805 CC lib/vhost/vhost_scsi.o 00:03:21.805 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:21.805 CC lib/vhost/vhost_blk.o 00:03:21.805 CC lib/vhost/rte_vhost_user.o 00:03:21.805 CC lib/iscsi/portal_grp.o 00:03:22.064 CC lib/iscsi/tgt_node.o 00:03:22.064 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:22.064 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:22.321 CC lib/iscsi/iscsi_subsystem.o 00:03:22.321 CC lib/iscsi/iscsi_rpc.o 00:03:22.321 CC lib/iscsi/task.o 00:03:22.321 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:22.321 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:22.321 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:22.579 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:22.579 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:22.579 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:22.837 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:22.837 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:22.837 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:22.837 CC lib/ftl/utils/ftl_conf.o 00:03:22.837 CC lib/ftl/utils/ftl_md.o 00:03:22.837 CC lib/ftl/utils/ftl_mempool.o 00:03:22.837 LIB libspdk_iscsi.a 00:03:22.837 LIB libspdk_vhost.a 00:03:23.095 CC lib/ftl/utils/ftl_bitmap.o 00:03:23.096 CC lib/ftl/utils/ftl_property.o 00:03:23.096 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:23.096 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.096 SO libspdk_vhost.so.7.1 00:03:23.096 SO libspdk_iscsi.so.7.0 00:03:23.096 LIB libspdk_nvmf.a 00:03:23.096 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:23.096 SYMLINK libspdk_vhost.so 00:03:23.096 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:23.096 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:23.096 SYMLINK libspdk_iscsi.so 00:03:23.096 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:23.354 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:23.354 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:23.354 SO libspdk_nvmf.so.17.0 00:03:23.354 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.354 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.354 CC lib/ftl/base/ftl_base_dev.o 00:03:23.354 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.354 CC lib/ftl/ftl_trace.o 00:03:23.354 SYMLINK libspdk_nvmf.so 00:03:23.612 LIB libspdk_ftl.a 00:03:23.872 SO libspdk_ftl.so.8.0 00:03:24.131 SYMLINK libspdk_ftl.so 00:03:24.390 CC module/env_dpdk/env_dpdk_rpc.o 00:03:24.390 CC module/vfu_device/vfu_virtio.o 00:03:24.390 
CC module/accel/dsa/accel_dsa.o 00:03:24.390 CC module/blob/bdev/blob_bdev.o 00:03:24.390 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:24.390 CC module/sock/posix/posix.o 00:03:24.390 CC module/sock/uring/uring.o 00:03:24.390 CC module/accel/error/accel_error.o 00:03:24.390 CC module/accel/iaa/accel_iaa.o 00:03:24.390 CC module/accel/ioat/accel_ioat.o 00:03:24.390 LIB libspdk_env_dpdk_rpc.a 00:03:24.648 SO libspdk_env_dpdk_rpc.so.5.0 00:03:24.648 SYMLINK libspdk_env_dpdk_rpc.so 00:03:24.648 CC module/accel/error/accel_error_rpc.o 00:03:24.648 LIB libspdk_scheduler_dynamic.a 00:03:24.648 CC module/accel/ioat/accel_ioat_rpc.o 00:03:24.648 SO libspdk_scheduler_dynamic.so.3.0 00:03:24.648 CC module/accel/iaa/accel_iaa_rpc.o 00:03:24.648 CC module/accel/dsa/accel_dsa_rpc.o 00:03:24.648 LIB libspdk_blob_bdev.a 00:03:24.648 SYMLINK libspdk_scheduler_dynamic.so 00:03:24.648 CC module/vfu_device/vfu_virtio_blk.o 00:03:24.648 LIB libspdk_accel_error.a 00:03:24.648 SO libspdk_blob_bdev.so.10.1 00:03:24.906 SO libspdk_accel_error.so.1.0 00:03:24.906 LIB libspdk_accel_ioat.a 00:03:24.906 LIB libspdk_accel_iaa.a 00:03:24.906 SYMLINK libspdk_blob_bdev.so 00:03:24.906 CC module/vfu_device/vfu_virtio_scsi.o 00:03:24.906 SO libspdk_accel_ioat.so.5.0 00:03:24.906 LIB libspdk_accel_dsa.a 00:03:24.906 SO libspdk_accel_iaa.so.2.0 00:03:24.906 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:24.906 SYMLINK libspdk_accel_error.so 00:03:24.906 CC module/vfu_device/vfu_virtio_rpc.o 00:03:24.906 SO libspdk_accel_dsa.so.4.0 00:03:24.906 SYMLINK libspdk_accel_ioat.so 00:03:24.906 SYMLINK libspdk_accel_dsa.so 00:03:24.906 SYMLINK libspdk_accel_iaa.so 00:03:24.906 LIB libspdk_scheduler_dpdk_governor.a 00:03:25.165 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:25.165 CC module/scheduler/gscheduler/gscheduler.o 00:03:25.165 CC module/bdev/delay/vbdev_delay.o 00:03:25.165 CC module/bdev/error/vbdev_error.o 00:03:25.165 CC module/blobfs/bdev/blobfs_bdev.o 00:03:25.165 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:25.165 CC module/bdev/error/vbdev_error_rpc.o 00:03:25.165 LIB libspdk_sock_uring.a 00:03:25.165 CC module/bdev/gpt/gpt.o 00:03:25.165 SO libspdk_sock_uring.so.4.0 00:03:25.165 LIB libspdk_vfu_device.a 00:03:25.165 CC module/bdev/lvol/vbdev_lvol.o 00:03:25.165 LIB libspdk_sock_posix.a 00:03:25.165 LIB libspdk_scheduler_gscheduler.a 00:03:25.165 SYMLINK libspdk_sock_uring.so 00:03:25.165 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:25.165 SO libspdk_vfu_device.so.2.0 00:03:25.165 SO libspdk_sock_posix.so.5.0 00:03:25.165 SO libspdk_scheduler_gscheduler.so.3.0 00:03:25.423 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:25.423 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:25.423 SYMLINK libspdk_scheduler_gscheduler.so 00:03:25.423 CC module/bdev/gpt/vbdev_gpt.o 00:03:25.423 SYMLINK libspdk_vfu_device.so 00:03:25.423 SYMLINK libspdk_sock_posix.so 00:03:25.423 LIB libspdk_bdev_error.a 00:03:25.423 SO libspdk_bdev_error.so.5.0 00:03:25.423 CC module/bdev/malloc/bdev_malloc.o 00:03:25.423 LIB libspdk_blobfs_bdev.a 00:03:25.423 LIB libspdk_bdev_delay.a 00:03:25.682 CC module/bdev/null/bdev_null.o 00:03:25.682 SO libspdk_blobfs_bdev.so.5.0 00:03:25.682 SO libspdk_bdev_delay.so.5.0 00:03:25.682 SYMLINK libspdk_bdev_error.so 00:03:25.682 CC module/bdev/nvme/bdev_nvme.o 00:03:25.682 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:25.682 CC module/bdev/null/bdev_null_rpc.o 00:03:25.682 SYMLINK libspdk_blobfs_bdev.so 00:03:25.682 SYMLINK libspdk_bdev_delay.so 00:03:25.682 CC module/bdev/nvme/nvme_rpc.o 00:03:25.682 CC 
module/bdev/passthru/vbdev_passthru.o 00:03:25.682 LIB libspdk_bdev_gpt.a 00:03:25.682 SO libspdk_bdev_gpt.so.5.0 00:03:25.682 LIB libspdk_bdev_lvol.a 00:03:25.682 CC module/bdev/raid/bdev_raid.o 00:03:25.682 SO libspdk_bdev_lvol.so.5.0 00:03:25.682 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:25.940 SYMLINK libspdk_bdev_gpt.so 00:03:25.940 LIB libspdk_bdev_null.a 00:03:25.940 SYMLINK libspdk_bdev_lvol.so 00:03:25.940 SO libspdk_bdev_null.so.5.0 00:03:25.940 CC module/bdev/nvme/bdev_mdns_client.o 00:03:25.940 CC module/bdev/nvme/vbdev_opal.o 00:03:25.940 CC module/bdev/split/vbdev_split.o 00:03:25.940 SYMLINK libspdk_bdev_null.so 00:03:25.940 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:25.940 LIB libspdk_bdev_malloc.a 00:03:25.940 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:25.940 SO libspdk_bdev_malloc.so.5.0 00:03:26.198 CC module/bdev/uring/bdev_uring.o 00:03:26.198 SYMLINK libspdk_bdev_malloc.so 00:03:26.198 CC module/bdev/aio/bdev_aio.o 00:03:26.198 LIB libspdk_bdev_passthru.a 00:03:26.198 SO libspdk_bdev_passthru.so.5.0 00:03:26.198 CC module/bdev/ftl/bdev_ftl.o 00:03:26.198 CC module/bdev/split/vbdev_split_rpc.o 00:03:26.198 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:26.198 SYMLINK libspdk_bdev_passthru.so 00:03:26.198 CC module/bdev/raid/bdev_raid_rpc.o 00:03:26.457 CC module/bdev/raid/bdev_raid_sb.o 00:03:26.457 CC module/bdev/aio/bdev_aio_rpc.o 00:03:26.457 CC module/bdev/uring/bdev_uring_rpc.o 00:03:26.457 LIB libspdk_bdev_zone_block.a 00:03:26.457 LIB libspdk_bdev_split.a 00:03:26.457 SO libspdk_bdev_zone_block.so.5.0 00:03:26.457 SO libspdk_bdev_split.so.5.0 00:03:26.457 SYMLINK libspdk_bdev_zone_block.so 00:03:26.457 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:26.715 CC module/bdev/raid/raid0.o 00:03:26.716 SYMLINK libspdk_bdev_split.so 00:03:26.716 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:26.716 CC module/bdev/raid/raid1.o 00:03:26.716 LIB libspdk_bdev_uring.a 00:03:26.716 CC module/bdev/iscsi/bdev_iscsi.o 00:03:26.716 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:26.716 SO libspdk_bdev_uring.so.5.0 00:03:26.716 LIB libspdk_bdev_aio.a 00:03:26.716 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:26.716 SO libspdk_bdev_aio.so.5.0 00:03:26.716 SYMLINK libspdk_bdev_uring.so 00:03:26.716 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:26.716 LIB libspdk_bdev_ftl.a 00:03:26.716 SYMLINK libspdk_bdev_aio.so 00:03:26.716 CC module/bdev/raid/concat.o 00:03:26.716 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:26.716 SO libspdk_bdev_ftl.so.5.0 00:03:26.974 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:26.974 SYMLINK libspdk_bdev_ftl.so 00:03:26.975 LIB libspdk_bdev_raid.a 00:03:26.975 LIB libspdk_bdev_iscsi.a 00:03:27.233 SO libspdk_bdev_raid.so.5.0 00:03:27.233 SO libspdk_bdev_iscsi.so.5.0 00:03:27.233 SYMLINK libspdk_bdev_iscsi.so 00:03:27.233 SYMLINK libspdk_bdev_raid.so 00:03:27.233 LIB libspdk_bdev_virtio.a 00:03:27.233 SO libspdk_bdev_virtio.so.5.0 00:03:27.491 SYMLINK libspdk_bdev_virtio.so 00:03:28.058 LIB libspdk_bdev_nvme.a 00:03:28.058 SO libspdk_bdev_nvme.so.6.0 00:03:28.058 SYMLINK libspdk_bdev_nvme.so 00:03:28.317 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.317 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:28.317 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.317 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.317 CC module/event/subsystems/sock/sock.o 00:03:28.317 CC module/event/subsystems/vmd/vmd.o 00:03:28.317 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:28.317 CC module/event/subsystems/vmd/vmd_rpc.o 
00:03:28.576 LIB libspdk_event_scheduler.a 00:03:28.576 LIB libspdk_event_sock.a 00:03:28.576 LIB libspdk_event_vhost_blk.a 00:03:28.576 SO libspdk_event_scheduler.so.3.0 00:03:28.576 SO libspdk_event_sock.so.4.0 00:03:28.576 SO libspdk_event_vhost_blk.so.2.0 00:03:28.576 LIB libspdk_event_vfu_tgt.a 00:03:28.576 SO libspdk_event_vfu_tgt.so.2.0 00:03:28.576 SYMLINK libspdk_event_vhost_blk.so 00:03:28.576 LIB libspdk_event_vmd.a 00:03:28.576 SYMLINK libspdk_event_sock.so 00:03:28.835 SYMLINK libspdk_event_scheduler.so 00:03:28.835 LIB libspdk_event_iobuf.a 00:03:28.835 SO libspdk_event_vmd.so.5.0 00:03:28.835 SYMLINK libspdk_event_vfu_tgt.so 00:03:28.835 SO libspdk_event_iobuf.so.2.0 00:03:28.835 SYMLINK libspdk_event_vmd.so 00:03:28.835 SYMLINK libspdk_event_iobuf.so 00:03:29.093 CC module/event/subsystems/accel/accel.o 00:03:29.093 LIB libspdk_event_accel.a 00:03:29.093 SO libspdk_event_accel.so.5.0 00:03:29.351 SYMLINK libspdk_event_accel.so 00:03:29.351 CC module/event/subsystems/bdev/bdev.o 00:03:29.610 LIB libspdk_event_bdev.a 00:03:29.610 SO libspdk_event_bdev.so.5.0 00:03:29.610 SYMLINK libspdk_event_bdev.so 00:03:29.868 CC module/event/subsystems/scsi/scsi.o 00:03:29.868 CC module/event/subsystems/nbd/nbd.o 00:03:29.868 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.869 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.869 CC module/event/subsystems/ublk/ublk.o 00:03:29.869 LIB libspdk_event_ublk.a 00:03:30.127 LIB libspdk_event_nbd.a 00:03:30.127 SO libspdk_event_ublk.so.2.0 00:03:30.127 LIB libspdk_event_scsi.a 00:03:30.127 SO libspdk_event_nbd.so.5.0 00:03:30.127 SO libspdk_event_scsi.so.5.0 00:03:30.127 SYMLINK libspdk_event_ublk.so 00:03:30.127 LIB libspdk_event_nvmf.a 00:03:30.127 SYMLINK libspdk_event_nbd.so 00:03:30.127 SYMLINK libspdk_event_scsi.so 00:03:30.127 SO libspdk_event_nvmf.so.5.0 00:03:30.127 SYMLINK libspdk_event_nvmf.so 00:03:30.413 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.413 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.413 LIB libspdk_event_vhost_scsi.a 00:03:30.413 SO libspdk_event_vhost_scsi.so.2.0 00:03:30.413 LIB libspdk_event_iscsi.a 00:03:30.413 SYMLINK libspdk_event_vhost_scsi.so 00:03:30.743 SO libspdk_event_iscsi.so.5.0 00:03:30.743 SYMLINK libspdk_event_iscsi.so 00:03:30.743 SO libspdk.so.5.0 00:03:30.743 SYMLINK libspdk.so 00:03:30.743 CC app/spdk_lspci/spdk_lspci.o 00:03:30.743 CC app/trace_record/trace_record.o 00:03:31.001 CXX app/trace/trace.o 00:03:31.001 CC app/nvmf_tgt/nvmf_main.o 00:03:31.001 CC app/iscsi_tgt/iscsi_tgt.o 00:03:31.001 CC examples/accel/perf/accel_perf.o 00:03:31.001 CC app/spdk_tgt/spdk_tgt.o 00:03:31.001 CC test/app/bdev_svc/bdev_svc.o 00:03:31.001 CC test/accel/dif/dif.o 00:03:31.001 CC test/bdev/bdevio/bdevio.o 00:03:31.001 LINK spdk_lspci 00:03:31.260 LINK nvmf_tgt 00:03:31.260 LINK spdk_trace_record 00:03:31.260 LINK bdev_svc 00:03:31.260 LINK spdk_tgt 00:03:31.260 LINK iscsi_tgt 00:03:31.518 LINK spdk_trace 00:03:31.518 CC app/spdk_nvme_perf/perf.o 00:03:31.518 LINK dif 00:03:31.518 LINK bdevio 00:03:31.518 LINK accel_perf 00:03:31.518 TEST_HEADER include/spdk/accel.h 00:03:31.518 TEST_HEADER include/spdk/accel_module.h 00:03:31.518 TEST_HEADER include/spdk/assert.h 00:03:31.518 TEST_HEADER include/spdk/barrier.h 00:03:31.518 TEST_HEADER include/spdk/base64.h 00:03:31.518 TEST_HEADER include/spdk/bdev.h 00:03:31.518 TEST_HEADER include/spdk/bdev_module.h 00:03:31.518 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.518 TEST_HEADER include/spdk/bit_array.h 00:03:31.518 TEST_HEADER 
include/spdk/bit_pool.h 00:03:31.518 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.518 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.518 TEST_HEADER include/spdk/blobfs.h 00:03:31.518 TEST_HEADER include/spdk/blob.h 00:03:31.518 CC app/spdk_nvme_identify/identify.o 00:03:31.518 TEST_HEADER include/spdk/conf.h 00:03:31.518 TEST_HEADER include/spdk/config.h 00:03:31.518 TEST_HEADER include/spdk/cpuset.h 00:03:31.518 TEST_HEADER include/spdk/crc16.h 00:03:31.518 TEST_HEADER include/spdk/crc32.h 00:03:31.518 TEST_HEADER include/spdk/crc64.h 00:03:31.518 TEST_HEADER include/spdk/dif.h 00:03:31.518 TEST_HEADER include/spdk/dma.h 00:03:31.518 TEST_HEADER include/spdk/endian.h 00:03:31.518 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.518 TEST_HEADER include/spdk/env.h 00:03:31.518 TEST_HEADER include/spdk/event.h 00:03:31.518 TEST_HEADER include/spdk/fd_group.h 00:03:31.518 TEST_HEADER include/spdk/fd.h 00:03:31.518 TEST_HEADER include/spdk/file.h 00:03:31.518 TEST_HEADER include/spdk/ftl.h 00:03:31.519 CC test/blobfs/mkfs/mkfs.o 00:03:31.519 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.519 TEST_HEADER include/spdk/hexlify.h 00:03:31.519 TEST_HEADER include/spdk/histogram_data.h 00:03:31.519 TEST_HEADER include/spdk/idxd.h 00:03:31.519 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.519 TEST_HEADER include/spdk/init.h 00:03:31.519 TEST_HEADER include/spdk/ioat.h 00:03:31.519 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.778 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.778 TEST_HEADER include/spdk/json.h 00:03:31.778 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.778 TEST_HEADER include/spdk/likely.h 00:03:31.778 TEST_HEADER include/spdk/log.h 00:03:31.778 TEST_HEADER include/spdk/lvol.h 00:03:31.778 TEST_HEADER include/spdk/memory.h 00:03:31.778 TEST_HEADER include/spdk/mmio.h 00:03:31.778 TEST_HEADER include/spdk/nbd.h 00:03:31.778 TEST_HEADER include/spdk/notify.h 00:03:31.778 CC test/dma/test_dma/test_dma.o 00:03:31.778 TEST_HEADER include/spdk/nvme.h 00:03:31.778 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.778 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:31.778 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.778 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.778 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.778 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.778 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.778 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.778 TEST_HEADER include/spdk/nvmf.h 00:03:31.778 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.778 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.778 TEST_HEADER include/spdk/opal.h 00:03:31.778 TEST_HEADER include/spdk/opal_spec.h 00:03:31.778 TEST_HEADER include/spdk/pci_ids.h 00:03:31.778 TEST_HEADER include/spdk/pipe.h 00:03:31.778 TEST_HEADER include/spdk/queue.h 00:03:31.778 TEST_HEADER include/spdk/reduce.h 00:03:31.778 TEST_HEADER include/spdk/rpc.h 00:03:31.778 TEST_HEADER include/spdk/scheduler.h 00:03:31.778 TEST_HEADER include/spdk/scsi.h 00:03:31.778 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.778 TEST_HEADER include/spdk/sock.h 00:03:31.778 TEST_HEADER include/spdk/stdinc.h 00:03:31.778 TEST_HEADER include/spdk/string.h 00:03:31.778 TEST_HEADER include/spdk/thread.h 00:03:31.778 TEST_HEADER include/spdk/trace.h 00:03:31.778 TEST_HEADER include/spdk/trace_parser.h 00:03:31.778 TEST_HEADER include/spdk/tree.h 00:03:31.778 TEST_HEADER include/spdk/ublk.h 00:03:31.778 TEST_HEADER include/spdk/util.h 00:03:31.778 TEST_HEADER include/spdk/uuid.h 00:03:31.778 TEST_HEADER include/spdk/version.h 00:03:31.778 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:03:31.778 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.778 TEST_HEADER include/spdk/vhost.h 00:03:31.778 TEST_HEADER include/spdk/vmd.h 00:03:31.778 TEST_HEADER include/spdk/xor.h 00:03:31.778 TEST_HEADER include/spdk/zipf.h 00:03:31.778 CXX test/cpp_headers/accel.o 00:03:31.778 CC test/env/mem_callbacks/mem_callbacks.o 00:03:31.778 CC test/app/histogram_perf/histogram_perf.o 00:03:31.778 LINK mkfs 00:03:31.778 CC examples/bdev/hello_world/hello_bdev.o 00:03:31.778 CC examples/blob/hello_world/hello_blob.o 00:03:32.037 CXX test/cpp_headers/accel_module.o 00:03:32.037 LINK histogram_perf 00:03:32.037 LINK test_dma 00:03:32.037 LINK nvme_fuzz 00:03:32.037 CXX test/cpp_headers/assert.o 00:03:32.037 LINK hello_bdev 00:03:32.037 LINK hello_blob 00:03:32.037 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:32.295 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:32.295 CXX test/cpp_headers/barrier.o 00:03:32.295 CXX test/cpp_headers/base64.o 00:03:32.295 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:32.295 CC test/env/vtophys/vtophys.o 00:03:32.295 LINK spdk_nvme_perf 00:03:32.295 LINK mem_callbacks 00:03:32.552 LINK spdk_nvme_identify 00:03:32.552 CC examples/bdev/bdevperf/bdevperf.o 00:03:32.552 CXX test/cpp_headers/bdev.o 00:03:32.552 CC examples/blob/cli/blobcli.o 00:03:32.552 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:32.552 LINK vtophys 00:03:32.552 CC test/env/memory/memory_ut.o 00:03:32.552 CC test/env/pci/pci_ut.o 00:03:32.809 CXX test/cpp_headers/bdev_module.o 00:03:32.809 LINK env_dpdk_post_init 00:03:32.809 CC app/spdk_nvme_discover/discovery_aer.o 00:03:32.809 CXX test/cpp_headers/bdev_zone.o 00:03:32.809 LINK vhost_fuzz 00:03:32.809 CXX test/cpp_headers/bit_array.o 00:03:33.066 LINK spdk_nvme_discover 00:03:33.066 CC test/app/jsoncat/jsoncat.o 00:03:33.066 CC test/app/stub/stub.o 00:03:33.067 LINK blobcli 00:03:33.067 CC app/spdk_top/spdk_top.o 00:03:33.067 LINK pci_ut 00:03:33.067 LINK jsoncat 00:03:33.067 CXX test/cpp_headers/bit_pool.o 00:03:33.067 LINK stub 00:03:33.325 LINK bdevperf 00:03:33.325 CC app/vhost/vhost.o 00:03:33.325 CXX test/cpp_headers/blob_bdev.o 00:03:33.325 CC examples/ioat/perf/perf.o 00:03:33.325 CC examples/nvme/reconnect/reconnect.o 00:03:33.325 CC examples/nvme/hello_world/hello_world.o 00:03:33.325 LINK vhost 00:03:33.583 CC app/spdk_dd/spdk_dd.o 00:03:33.583 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.583 CC app/fio/nvme/fio_plugin.o 00:03:33.583 LINK memory_ut 00:03:33.583 LINK ioat_perf 00:03:33.583 LINK hello_world 00:03:33.841 CXX test/cpp_headers/blobfs.o 00:03:33.841 CC examples/ioat/verify/verify.o 00:03:33.841 LINK reconnect 00:03:33.841 LINK iscsi_fuzz 00:03:33.841 LINK spdk_dd 00:03:33.841 LINK spdk_top 00:03:33.841 CXX test/cpp_headers/blob.o 00:03:33.841 CC test/event/event_perf/event_perf.o 00:03:33.841 CC examples/sock/hello_world/hello_sock.o 00:03:34.100 CC test/lvol/esnap/esnap.o 00:03:34.100 LINK verify 00:03:34.100 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.100 CXX test/cpp_headers/conf.o 00:03:34.100 LINK event_perf 00:03:34.100 CC examples/nvme/arbitration/arbitration.o 00:03:34.100 CC examples/nvme/hotplug/hotplug.o 00:03:34.100 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:34.100 LINK hello_sock 00:03:34.358 LINK spdk_nvme 00:03:34.358 CXX test/cpp_headers/config.o 00:03:34.358 CC test/nvme/aer/aer.o 00:03:34.358 CXX test/cpp_headers/cpuset.o 00:03:34.358 CC test/event/reactor/reactor.o 00:03:34.358 LINK cmb_copy 00:03:34.358 LINK hotplug 00:03:34.358 CC 
app/fio/bdev/fio_plugin.o 00:03:34.358 CC test/event/reactor_perf/reactor_perf.o 00:03:34.358 CXX test/cpp_headers/crc16.o 00:03:34.616 LINK reactor 00:03:34.616 LINK arbitration 00:03:34.616 CXX test/cpp_headers/crc32.o 00:03:34.616 LINK nvme_manage 00:03:34.616 LINK aer 00:03:34.616 CXX test/cpp_headers/crc64.o 00:03:34.616 LINK reactor_perf 00:03:34.616 CC test/event/app_repeat/app_repeat.o 00:03:34.616 CC test/rpc_client/rpc_client_test.o 00:03:34.874 CXX test/cpp_headers/dif.o 00:03:34.874 CC test/event/scheduler/scheduler.o 00:03:34.874 CC test/nvme/reset/reset.o 00:03:34.874 CC test/thread/poller_perf/poller_perf.o 00:03:34.874 CC examples/nvme/abort/abort.o 00:03:34.874 CC test/nvme/sgl/sgl.o 00:03:34.874 LINK app_repeat 00:03:34.874 LINK spdk_bdev 00:03:34.874 CXX test/cpp_headers/dma.o 00:03:34.874 LINK rpc_client_test 00:03:34.874 LINK poller_perf 00:03:35.133 LINK scheduler 00:03:35.133 CXX test/cpp_headers/endian.o 00:03:35.133 LINK sgl 00:03:35.133 LINK reset 00:03:35.133 CC test/nvme/e2edp/nvme_dp.o 00:03:35.133 CXX test/cpp_headers/env_dpdk.o 00:03:35.133 CC test/nvme/overhead/overhead.o 00:03:35.133 LINK abort 00:03:35.133 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.392 CXX test/cpp_headers/env.o 00:03:35.392 CC test/nvme/err_injection/err_injection.o 00:03:35.392 CC test/nvme/startup/startup.o 00:03:35.392 CC test/nvme/reserve/reserve.o 00:03:35.392 CC test/nvme/simple_copy/simple_copy.o 00:03:35.392 LINK nvme_dp 00:03:35.392 LINK overhead 00:03:35.392 LINK pmr_persistence 00:03:35.392 CC test/nvme/connect_stress/connect_stress.o 00:03:35.392 CXX test/cpp_headers/event.o 00:03:35.650 LINK startup 00:03:35.650 LINK err_injection 00:03:35.650 LINK reserve 00:03:35.650 CXX test/cpp_headers/fd_group.o 00:03:35.650 LINK simple_copy 00:03:35.650 CC test/nvme/boot_partition/boot_partition.o 00:03:35.650 LINK connect_stress 00:03:35.650 CXX test/cpp_headers/fd.o 00:03:35.908 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.908 CC test/nvme/compliance/nvme_compliance.o 00:03:35.908 CC examples/vmd/led/led.o 00:03:35.908 CC examples/nvmf/nvmf/nvmf.o 00:03:35.908 LINK boot_partition 00:03:35.908 CC examples/util/zipf/zipf.o 00:03:35.908 LINK lsvmd 00:03:35.908 CXX test/cpp_headers/file.o 00:03:35.908 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.908 CC examples/thread/thread/thread_ex.o 00:03:35.908 LINK led 00:03:36.167 LINK zipf 00:03:36.167 CXX test/cpp_headers/ftl.o 00:03:36.167 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:36.167 LINK nvme_compliance 00:03:36.167 LINK fused_ordering 00:03:36.167 CC test/nvme/fdp/fdp.o 00:03:36.167 LINK nvmf 00:03:36.167 LINK thread 00:03:36.167 CC test/nvme/cuse/cuse.o 00:03:36.425 CXX test/cpp_headers/gpt_spec.o 00:03:36.425 LINK doorbell_aers 00:03:36.425 CXX test/cpp_headers/hexlify.o 00:03:36.425 CC examples/idxd/perf/perf.o 00:03:36.425 CXX test/cpp_headers/histogram_data.o 00:03:36.425 CXX test/cpp_headers/idxd.o 00:03:36.425 CXX test/cpp_headers/idxd_spec.o 00:03:36.425 CXX test/cpp_headers/init.o 00:03:36.425 CXX test/cpp_headers/ioat.o 00:03:36.425 LINK fdp 00:03:36.426 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:36.426 CXX test/cpp_headers/ioat_spec.o 00:03:36.684 CXX test/cpp_headers/iscsi_spec.o 00:03:36.684 CXX test/cpp_headers/json.o 00:03:36.684 CXX test/cpp_headers/jsonrpc.o 00:03:36.684 CXX test/cpp_headers/likely.o 00:03:36.684 CXX test/cpp_headers/log.o 00:03:36.684 LINK idxd_perf 00:03:36.684 CXX test/cpp_headers/lvol.o 00:03:36.684 LINK interrupt_tgt 00:03:36.684 CXX test/cpp_headers/memory.o 
00:03:36.942 CXX test/cpp_headers/mmio.o 00:03:36.942 CXX test/cpp_headers/nbd.o 00:03:36.942 CXX test/cpp_headers/notify.o 00:03:36.942 CXX test/cpp_headers/nvme.o 00:03:36.942 CXX test/cpp_headers/nvme_intel.o 00:03:36.942 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.942 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.942 CXX test/cpp_headers/nvme_spec.o 00:03:36.942 CXX test/cpp_headers/nvme_zns.o 00:03:36.942 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.942 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.942 CXX test/cpp_headers/nvmf.o 00:03:37.200 CXX test/cpp_headers/nvmf_spec.o 00:03:37.200 CXX test/cpp_headers/nvmf_transport.o 00:03:37.200 CXX test/cpp_headers/opal.o 00:03:37.200 CXX test/cpp_headers/opal_spec.o 00:03:37.200 CXX test/cpp_headers/pci_ids.o 00:03:37.200 CXX test/cpp_headers/pipe.o 00:03:37.200 CXX test/cpp_headers/queue.o 00:03:37.200 CXX test/cpp_headers/reduce.o 00:03:37.200 CXX test/cpp_headers/rpc.o 00:03:37.200 CXX test/cpp_headers/scheduler.o 00:03:37.200 CXX test/cpp_headers/scsi.o 00:03:37.200 CXX test/cpp_headers/scsi_spec.o 00:03:37.200 CXX test/cpp_headers/sock.o 00:03:37.201 CXX test/cpp_headers/stdinc.o 00:03:37.201 LINK cuse 00:03:37.459 CXX test/cpp_headers/string.o 00:03:37.459 CXX test/cpp_headers/thread.o 00:03:37.459 CXX test/cpp_headers/trace.o 00:03:37.459 CXX test/cpp_headers/trace_parser.o 00:03:37.459 CXX test/cpp_headers/tree.o 00:03:37.459 CXX test/cpp_headers/ublk.o 00:03:37.459 CXX test/cpp_headers/util.o 00:03:37.459 CXX test/cpp_headers/uuid.o 00:03:37.459 CXX test/cpp_headers/version.o 00:03:37.459 CXX test/cpp_headers/vfio_user_pci.o 00:03:37.459 CXX test/cpp_headers/vfio_user_spec.o 00:03:37.459 CXX test/cpp_headers/vhost.o 00:03:37.717 CXX test/cpp_headers/vmd.o 00:03:37.717 CXX test/cpp_headers/xor.o 00:03:37.717 CXX test/cpp_headers/zipf.o 00:03:39.094 LINK esnap 00:03:39.094 00:03:39.094 real 1m1.144s 00:03:39.094 user 6m34.877s 00:03:39.094 sys 1m23.365s 00:03:39.094 08:54:16 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:39.094 08:54:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:39.094 ************************************ 00:03:39.094 END TEST make 00:03:39.094 ************************************ 00:03:39.353 08:54:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:39.353 08:54:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:39.353 08:54:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:39.353 08:54:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:39.353 08:54:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:39.353 08:54:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:39.353 08:54:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:39.353 08:54:16 -- scripts/common.sh@335 -- # IFS=.-: 00:03:39.353 08:54:16 -- scripts/common.sh@335 -- # read -ra ver1 00:03:39.353 08:54:16 -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.353 08:54:16 -- scripts/common.sh@336 -- # read -ra ver2 00:03:39.353 08:54:16 -- scripts/common.sh@337 -- # local 'op=<' 00:03:39.353 08:54:16 -- scripts/common.sh@339 -- # ver1_l=2 00:03:39.353 08:54:16 -- scripts/common.sh@340 -- # ver2_l=1 00:03:39.354 08:54:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:39.354 08:54:16 -- scripts/common.sh@343 -- # case "$op" in 00:03:39.354 08:54:16 -- scripts/common.sh@344 -- # : 1 00:03:39.354 08:54:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:39.354 08:54:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.354 08:54:16 -- scripts/common.sh@364 -- # decimal 1 00:03:39.354 08:54:16 -- scripts/common.sh@352 -- # local d=1 00:03:39.354 08:54:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.354 08:54:16 -- scripts/common.sh@354 -- # echo 1 00:03:39.354 08:54:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:39.354 08:54:16 -- scripts/common.sh@365 -- # decimal 2 00:03:39.354 08:54:16 -- scripts/common.sh@352 -- # local d=2 00:03:39.354 08:54:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.354 08:54:16 -- scripts/common.sh@354 -- # echo 2 00:03:39.354 08:54:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:39.354 08:54:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:39.354 08:54:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:39.354 08:54:16 -- scripts/common.sh@367 -- # return 0 00:03:39.354 08:54:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.354 08:54:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:39.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.354 --rc genhtml_branch_coverage=1 00:03:39.354 --rc genhtml_function_coverage=1 00:03:39.354 --rc genhtml_legend=1 00:03:39.354 --rc geninfo_all_blocks=1 00:03:39.354 --rc geninfo_unexecuted_blocks=1 00:03:39.354 00:03:39.354 ' 00:03:39.354 08:54:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:39.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.354 --rc genhtml_branch_coverage=1 00:03:39.354 --rc genhtml_function_coverage=1 00:03:39.354 --rc genhtml_legend=1 00:03:39.354 --rc geninfo_all_blocks=1 00:03:39.354 --rc geninfo_unexecuted_blocks=1 00:03:39.354 00:03:39.354 ' 00:03:39.354 08:54:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:39.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.354 --rc genhtml_branch_coverage=1 00:03:39.354 --rc genhtml_function_coverage=1 00:03:39.354 --rc genhtml_legend=1 00:03:39.354 --rc geninfo_all_blocks=1 00:03:39.354 --rc geninfo_unexecuted_blocks=1 00:03:39.354 00:03:39.354 ' 00:03:39.354 08:54:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:39.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.354 --rc genhtml_branch_coverage=1 00:03:39.354 --rc genhtml_function_coverage=1 00:03:39.354 --rc genhtml_legend=1 00:03:39.354 --rc geninfo_all_blocks=1 00:03:39.354 --rc geninfo_unexecuted_blocks=1 00:03:39.354 00:03:39.354 ' 00:03:39.354 08:54:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:39.354 08:54:16 -- nvmf/common.sh@7 -- # uname -s 00:03:39.354 08:54:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.354 08:54:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.354 08:54:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.354 08:54:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.354 08:54:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.354 08:54:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.354 08:54:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.354 08:54:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.354 08:54:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.354 08:54:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.354 08:54:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:03:39.354 
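Note: the xtrace above walks scripts/common.sh deciding that lcov 1.15 is older than 2 and therefore turning on the branch/function coverage flags. A minimal stand-alone sketch of that field-by-field version compare follows; it is an assumed simplification (the real cmp_versions helper supports several operators and validates each field numerically via decimal), and the cmp_lt name is illustrative.

    # Compare two dotted versions; succeed (exit 0) when $1 < $2.
    cmp_lt() {
        local IFS=.- i
        local -a ver1 ver2
        read -ra ver1 <<< "$1"          # split "1.15" into (1 15)
        read -ra ver2 <<< "$2"
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
            ((a < b)) && return 0       # first differing field decides
            ((a > b)) && return 1
        done
        return 1                        # equal versions are not "less than"
    }

    # Same decision the log shows: old lcov needs the explicit coverage rc flags.
    if cmp_lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi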
08:54:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:03:39.354 08:54:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.354 08:54:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.354 08:54:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:39.354 08:54:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:39.354 08:54:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.354 08:54:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.354 08:54:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.354 08:54:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.354 08:54:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.354 08:54:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.354 08:54:16 -- paths/export.sh@5 -- # export PATH 00:03:39.354 08:54:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.354 08:54:16 -- nvmf/common.sh@46 -- # : 0 00:03:39.354 08:54:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:39.354 08:54:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:39.354 08:54:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:39.354 08:54:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.354 08:54:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.354 08:54:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:39.354 08:54:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:39.354 08:54:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:39.354 08:54:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:39.354 08:54:16 -- spdk/autotest.sh@32 -- # uname -s 00:03:39.354 08:54:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:39.354 08:54:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:39.354 08:54:16 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:39.354 08:54:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:39.354 08:54:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:39.354 08:54:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:39.614 08:54:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:39.614 08:54:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:39.614 08:54:16 -- spdk/autotest.sh@48 -- # 
udevadm_pid=48025 00:03:39.614 08:54:16 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:39.614 08:54:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:39.614 08:54:16 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:39.614 08:54:16 -- spdk/autotest.sh@54 -- # echo 48048 00:03:39.614 08:54:16 -- spdk/autotest.sh@56 -- # echo 48051 00:03:39.614 08:54:16 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:39.614 08:54:16 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:39.614 08:54:16 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:39.614 08:54:16 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:39.614 08:54:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:39.614 08:54:16 -- common/autotest_common.sh@10 -- # set +x 00:03:39.614 08:54:16 -- spdk/autotest.sh@70 -- # create_test_list 00:03:39.614 08:54:16 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:39.614 08:54:16 -- common/autotest_common.sh@10 -- # set +x 00:03:39.614 08:54:16 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:39.614 08:54:16 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:39.614 08:54:16 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:39.614 08:54:16 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:39.614 08:54:16 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:39.614 08:54:16 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:39.614 08:54:16 -- common/autotest_common.sh@1450 -- # uname 00:03:39.614 08:54:16 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:39.614 08:54:16 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:39.614 08:54:16 -- common/autotest_common.sh@1470 -- # uname 00:03:39.614 08:54:16 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:39.614 08:54:16 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:39.614 08:54:16 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:39.614 lcov: LCOV version 1.15 00:03:39.614 08:54:16 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:47.732 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:47.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:47.732 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:47.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:47.732 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:47.732 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:09.671 08:54:44 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:09.671 08:54:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:09.671 08:54:44 -- common/autotest_common.sh@10 -- # set +x 00:04:09.671 08:54:44 -- spdk/autotest.sh@89 -- # rm -f 00:04:09.671 08:54:44 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.671 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:09.671 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:09.671 08:54:45 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:09.671 08:54:45 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:09.671 08:54:45 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:09.671 08:54:45 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:09.671 08:54:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.671 08:54:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:09.671 08:54:45 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:09.671 08:54:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.671 08:54:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.671 08:54:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.671 08:54:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:09.671 08:54:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:09.671 08:54:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:09.671 08:54:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.671 08:54:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.671 08:54:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:09.671 08:54:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:09.671 08:54:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:09.671 08:54:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.671 08:54:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:09.671 08:54:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:09.671 08:54:45 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:09.672 08:54:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:09.672 08:54:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:09.672 08:54:45 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:09.672 08:54:45 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:09.672 08:54:45 -- spdk/autotest.sh@108 -- # grep -v p 00:04:09.672 08:54:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:09.672 08:54:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:09.672 08:54:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:09.672 08:54:45 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:09.672 08:54:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:09.672 No valid GPT data, bailing 00:04:09.672 08:54:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
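Note: the get_zoned_devs/is_block_zoned trace above treats a namespace as zoned when /sys/block/<dev>/queue/zoned reads anything other than "none"; none of the four namespaces here qualify, so the cleanup continues on all of them. A minimal sketch of that scan (an assumed simplification; the zoned_devs name mirrors the traced variable, the rest is illustrative):

    # Collect NVMe namespaces whose queue reports a zoned model other than "none".
    declare -A zoned_devs=()
    for sysdir in /sys/block/nvme*; do
        [[ -e $sysdir/queue/zoned ]] || continue   # skip non-existent / non-block entries
        dev=${sysdir##*/}
        if [[ $(<"$sysdir/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1                     # later stages can skip or special-case these
        fi
    done
    echo "zoned namespaces found: ${#zoned_devs[@]}"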
00:04:09.672 08:54:45 -- scripts/common.sh@393 -- # pt= 00:04:09.672 08:54:45 -- scripts/common.sh@394 -- # return 1 00:04:09.672 08:54:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:09.672 1+0 records in 00:04:09.672 1+0 records out 00:04:09.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442425 s, 237 MB/s 00:04:09.672 08:54:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:09.672 08:54:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:09.672 08:54:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:09.672 08:54:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:09.672 08:54:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:09.672 No valid GPT data, bailing 00:04:09.672 08:54:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:09.672 08:54:45 -- scripts/common.sh@393 -- # pt= 00:04:09.672 08:54:45 -- scripts/common.sh@394 -- # return 1 00:04:09.672 08:54:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:09.672 1+0 records in 00:04:09.672 1+0 records out 00:04:09.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453234 s, 231 MB/s 00:04:09.672 08:54:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:09.672 08:54:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:09.672 08:54:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:09.672 08:54:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:09.672 08:54:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:09.672 No valid GPT data, bailing 00:04:09.672 08:54:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:09.672 08:54:45 -- scripts/common.sh@393 -- # pt= 00:04:09.672 08:54:45 -- scripts/common.sh@394 -- # return 1 00:04:09.672 08:54:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:09.672 1+0 records in 00:04:09.672 1+0 records out 00:04:09.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392028 s, 267 MB/s 00:04:09.672 08:54:45 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:09.672 08:54:45 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:09.672 08:54:45 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:09.672 08:54:45 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:09.672 08:54:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:09.672 No valid GPT data, bailing 00:04:09.672 08:54:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:09.672 08:54:45 -- scripts/common.sh@393 -- # pt= 00:04:09.672 08:54:45 -- scripts/common.sh@394 -- # return 1 00:04:09.672 08:54:45 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:09.672 1+0 records in 00:04:09.672 1+0 records out 00:04:09.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00370653 s, 283 MB/s 00:04:09.672 08:54:45 -- spdk/autotest.sh@116 -- # sync 00:04:09.672 08:54:46 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:09.672 08:54:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:09.672 08:54:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:11.051 08:54:47 -- spdk/autotest.sh@122 -- # uname -s 00:04:11.051 08:54:47 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
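Note: the block above shows autotest.sh probing each whole NVMe namespace (partitions are filtered out with grep -v p) for a partition table and, finding none, zeroing the first MiB so stale GPT/SPDK metadata cannot leak into the test run. A minimal sketch of that cleanup loop, assuming blkid and dd behave as in the log; this is destructive, since it overwrites the start of each device:

    # Wipe the start of every unpartitioned, unused NVMe namespace before testing.
    for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            # No partition table detected: clear the first MiB (destructive).
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done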
00:04:11.051 08:54:47 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:11.051 08:54:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.051 08:54:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.051 08:54:47 -- common/autotest_common.sh@10 -- # set +x 00:04:11.051 ************************************ 00:04:11.051 START TEST setup.sh 00:04:11.051 ************************************ 00:04:11.051 08:54:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:11.051 * Looking for test storage... 00:04:11.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:11.051 08:54:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:11.051 08:54:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:11.051 08:54:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:11.310 08:54:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:11.310 08:54:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:11.310 08:54:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:11.310 08:54:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:11.310 08:54:47 -- scripts/common.sh@335 -- # IFS=.-: 00:04:11.310 08:54:47 -- scripts/common.sh@335 -- # read -ra ver1 00:04:11.310 08:54:47 -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.310 08:54:47 -- scripts/common.sh@336 -- # read -ra ver2 00:04:11.310 08:54:47 -- scripts/common.sh@337 -- # local 'op=<' 00:04:11.310 08:54:47 -- scripts/common.sh@339 -- # ver1_l=2 00:04:11.310 08:54:47 -- scripts/common.sh@340 -- # ver2_l=1 00:04:11.310 08:54:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:11.310 08:54:47 -- scripts/common.sh@343 -- # case "$op" in 00:04:11.311 08:54:47 -- scripts/common.sh@344 -- # : 1 00:04:11.311 08:54:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:11.311 08:54:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.311 08:54:47 -- scripts/common.sh@364 -- # decimal 1 00:04:11.311 08:54:47 -- scripts/common.sh@352 -- # local d=1 00:04:11.311 08:54:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.311 08:54:47 -- scripts/common.sh@354 -- # echo 1 00:04:11.311 08:54:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:11.311 08:54:47 -- scripts/common.sh@365 -- # decimal 2 00:04:11.311 08:54:47 -- scripts/common.sh@352 -- # local d=2 00:04:11.311 08:54:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.311 08:54:47 -- scripts/common.sh@354 -- # echo 2 00:04:11.311 08:54:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:11.311 08:54:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:11.311 08:54:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:11.311 08:54:47 -- scripts/common.sh@367 -- # return 0 00:04:11.311 08:54:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.311 08:54:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.311 --rc genhtml_branch_coverage=1 00:04:11.311 --rc genhtml_function_coverage=1 00:04:11.311 --rc genhtml_legend=1 00:04:11.311 --rc geninfo_all_blocks=1 00:04:11.311 --rc geninfo_unexecuted_blocks=1 00:04:11.311 00:04:11.311 ' 00:04:11.311 08:54:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.311 --rc genhtml_branch_coverage=1 00:04:11.311 --rc genhtml_function_coverage=1 00:04:11.311 --rc genhtml_legend=1 00:04:11.311 --rc geninfo_all_blocks=1 00:04:11.311 --rc geninfo_unexecuted_blocks=1 00:04:11.311 00:04:11.311 ' 00:04:11.311 08:54:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.311 --rc genhtml_branch_coverage=1 00:04:11.311 --rc genhtml_function_coverage=1 00:04:11.311 --rc genhtml_legend=1 00:04:11.311 --rc geninfo_all_blocks=1 00:04:11.311 --rc geninfo_unexecuted_blocks=1 00:04:11.311 00:04:11.311 ' 00:04:11.311 08:54:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.311 --rc genhtml_branch_coverage=1 00:04:11.311 --rc genhtml_function_coverage=1 00:04:11.311 --rc genhtml_legend=1 00:04:11.311 --rc geninfo_all_blocks=1 00:04:11.311 --rc geninfo_unexecuted_blocks=1 00:04:11.311 00:04:11.311 ' 00:04:11.311 08:54:47 -- setup/test-setup.sh@10 -- # uname -s 00:04:11.311 08:54:48 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:11.311 08:54:48 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:11.311 08:54:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.311 08:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.311 08:54:48 -- common/autotest_common.sh@10 -- # set +x 00:04:11.311 ************************************ 00:04:11.311 START TEST acl 00:04:11.311 ************************************ 00:04:11.311 08:54:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:11.311 * Looking for test storage... 
00:04:11.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:11.311 08:54:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:11.311 08:54:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:11.311 08:54:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:11.311 08:54:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:11.311 08:54:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:11.311 08:54:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:11.311 08:54:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:11.311 08:54:48 -- scripts/common.sh@335 -- # IFS=.-: 00:04:11.311 08:54:48 -- scripts/common.sh@335 -- # read -ra ver1 00:04:11.311 08:54:48 -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.311 08:54:48 -- scripts/common.sh@336 -- # read -ra ver2 00:04:11.311 08:54:48 -- scripts/common.sh@337 -- # local 'op=<' 00:04:11.311 08:54:48 -- scripts/common.sh@339 -- # ver1_l=2 00:04:11.311 08:54:48 -- scripts/common.sh@340 -- # ver2_l=1 00:04:11.311 08:54:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:11.311 08:54:48 -- scripts/common.sh@343 -- # case "$op" in 00:04:11.311 08:54:48 -- scripts/common.sh@344 -- # : 1 00:04:11.311 08:54:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:11.311 08:54:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.311 08:54:48 -- scripts/common.sh@364 -- # decimal 1 00:04:11.311 08:54:48 -- scripts/common.sh@352 -- # local d=1 00:04:11.311 08:54:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.311 08:54:48 -- scripts/common.sh@354 -- # echo 1 00:04:11.311 08:54:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:11.311 08:54:48 -- scripts/common.sh@365 -- # decimal 2 00:04:11.311 08:54:48 -- scripts/common.sh@352 -- # local d=2 00:04:11.311 08:54:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.311 08:54:48 -- scripts/common.sh@354 -- # echo 2 00:04:11.311 08:54:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:11.311 08:54:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:11.311 08:54:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:11.311 08:54:48 -- scripts/common.sh@367 -- # return 0 00:04:11.311 08:54:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.311 08:54:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.311 --rc genhtml_branch_coverage=1 00:04:11.311 --rc genhtml_function_coverage=1 00:04:11.311 --rc genhtml_legend=1 00:04:11.311 --rc geninfo_all_blocks=1 00:04:11.311 --rc geninfo_unexecuted_blocks=1 00:04:11.311 00:04:11.311 ' 00:04:11.311 08:54:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.311 --rc genhtml_branch_coverage=1 00:04:11.311 --rc genhtml_function_coverage=1 00:04:11.311 --rc genhtml_legend=1 00:04:11.311 --rc geninfo_all_blocks=1 00:04:11.311 --rc geninfo_unexecuted_blocks=1 00:04:11.311 00:04:11.311 ' 00:04:11.311 08:54:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.311 --rc genhtml_branch_coverage=1 00:04:11.311 --rc genhtml_function_coverage=1 00:04:11.311 --rc genhtml_legend=1 00:04:11.311 --rc geninfo_all_blocks=1 00:04:11.311 --rc geninfo_unexecuted_blocks=1 00:04:11.311 00:04:11.311 ' 00:04:11.311 08:54:48 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.311 --rc genhtml_branch_coverage=1 00:04:11.311 --rc genhtml_function_coverage=1 00:04:11.311 --rc genhtml_legend=1 00:04:11.311 --rc geninfo_all_blocks=1 00:04:11.311 --rc geninfo_unexecuted_blocks=1 00:04:11.311 00:04:11.311 ' 00:04:11.311 08:54:48 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:11.311 08:54:48 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:11.311 08:54:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:11.311 08:54:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:11.311 08:54:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:11.311 08:54:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:11.311 08:54:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:11.311 08:54:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.311 08:54:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:11.311 08:54:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:11.311 08:54:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:11.311 08:54:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:11.311 08:54:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:11.311 08:54:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:11.311 08:54:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:11.311 08:54:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:11.311 08:54:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:11.312 08:54:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:11.312 08:54:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:11.312 08:54:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:11.312 08:54:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:11.312 08:54:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:11.312 08:54:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:11.312 08:54:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:11.312 08:54:48 -- setup/acl.sh@12 -- # devs=() 00:04:11.312 08:54:48 -- setup/acl.sh@12 -- # declare -a devs 00:04:11.312 08:54:48 -- setup/acl.sh@13 -- # drivers=() 00:04:11.312 08:54:48 -- setup/acl.sh@13 -- # declare -A drivers 00:04:11.312 08:54:48 -- setup/acl.sh@51 -- # setup reset 00:04:11.312 08:54:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.312 08:54:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.248 08:54:48 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:12.248 08:54:48 -- setup/acl.sh@16 -- # local dev driver 00:04:12.248 08:54:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.248 08:54:48 -- setup/acl.sh@15 -- # setup output status 00:04:12.248 08:54:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.249 08:54:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:12.249 Hugepages 00:04:12.249 node hugesize free / total 00:04:12.249 08:54:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:12.249 08:54:49 -- setup/acl.sh@19 -- # continue 00:04:12.249 08:54:49 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:12.249 00:04:12.249 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:12.249 08:54:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:12.249 08:54:49 -- setup/acl.sh@19 -- # continue 00:04:12.249 08:54:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.249 08:54:49 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:12.249 08:54:49 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:12.249 08:54:49 -- setup/acl.sh@20 -- # continue 00:04:12.249 08:54:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.507 08:54:49 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:12.507 08:54:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:12.507 08:54:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:12.507 08:54:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:12.507 08:54:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:12.507 08:54:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.507 08:54:49 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:12.507 08:54:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:12.507 08:54:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:12.507 08:54:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:12.507 08:54:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:12.507 08:54:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.507 08:54:49 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:12.507 08:54:49 -- setup/acl.sh@54 -- # run_test denied denied 00:04:12.507 08:54:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.507 08:54:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.507 08:54:49 -- common/autotest_common.sh@10 -- # set +x 00:04:12.507 ************************************ 00:04:12.507 START TEST denied 00:04:12.507 ************************************ 00:04:12.507 08:54:49 -- common/autotest_common.sh@1114 -- # denied 00:04:12.507 08:54:49 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:12.507 08:54:49 -- setup/acl.sh@38 -- # setup output config 00:04:12.507 08:54:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:12.507 08:54:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.507 08:54:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.444 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:13.444 08:54:50 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:13.444 08:54:50 -- setup/acl.sh@28 -- # local dev driver 00:04:13.444 08:54:50 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:13.444 08:54:50 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:13.444 08:54:50 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:13.444 08:54:50 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:13.444 08:54:50 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:13.444 08:54:50 -- setup/acl.sh@41 -- # setup reset 00:04:13.444 08:54:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.444 08:54:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.012 00:04:14.012 real 0m1.410s 00:04:14.012 user 0m0.575s 00:04:14.012 sys 0m0.799s 00:04:14.012 08:54:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.012 08:54:50 -- common/autotest_common.sh@10 -- # set +x 00:04:14.012 ************************************ 00:04:14.012 END TEST denied 00:04:14.012 
************************************ 00:04:14.012 08:54:50 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:14.012 08:54:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.012 08:54:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.012 08:54:50 -- common/autotest_common.sh@10 -- # set +x 00:04:14.012 ************************************ 00:04:14.012 START TEST allowed 00:04:14.012 ************************************ 00:04:14.012 08:54:50 -- common/autotest_common.sh@1114 -- # allowed 00:04:14.012 08:54:50 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:14.012 08:54:50 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:14.012 08:54:50 -- setup/acl.sh@45 -- # setup output config 00:04:14.012 08:54:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.012 08:54:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.959 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.959 08:54:51 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:14.959 08:54:51 -- setup/acl.sh@28 -- # local dev driver 00:04:14.959 08:54:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:14.959 08:54:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:14.959 08:54:51 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:14.959 08:54:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:14.959 08:54:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:14.959 08:54:51 -- setup/acl.sh@48 -- # setup reset 00:04:14.959 08:54:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.959 08:54:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.528 00:04:15.528 real 0m1.500s 00:04:15.528 user 0m0.708s 00:04:15.528 sys 0m0.793s 00:04:15.528 08:54:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.528 08:54:52 -- common/autotest_common.sh@10 -- # set +x 00:04:15.528 ************************************ 00:04:15.528 END TEST allowed 00:04:15.528 ************************************ 00:04:15.528 00:04:15.528 real 0m4.277s 00:04:15.528 user 0m1.950s 00:04:15.528 sys 0m2.315s 00:04:15.528 08:54:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.528 08:54:52 -- common/autotest_common.sh@10 -- # set +x 00:04:15.528 ************************************ 00:04:15.528 END TEST acl 00:04:15.528 ************************************ 00:04:15.528 08:54:52 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:15.528 08:54:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.528 08:54:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.528 08:54:52 -- common/autotest_common.sh@10 -- # set +x 00:04:15.528 ************************************ 00:04:15.528 START TEST hugepages 00:04:15.528 ************************************ 00:04:15.528 08:54:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:15.528 * Looking for test storage... 
00:04:15.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:15.528 08:54:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:15.528 08:54:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:15.528 08:54:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:15.789 08:54:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:15.789 08:54:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:15.789 08:54:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:15.789 08:54:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:15.789 08:54:52 -- scripts/common.sh@335 -- # IFS=.-: 00:04:15.789 08:54:52 -- scripts/common.sh@335 -- # read -ra ver1 00:04:15.789 08:54:52 -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.789 08:54:52 -- scripts/common.sh@336 -- # read -ra ver2 00:04:15.789 08:54:52 -- scripts/common.sh@337 -- # local 'op=<' 00:04:15.789 08:54:52 -- scripts/common.sh@339 -- # ver1_l=2 00:04:15.789 08:54:52 -- scripts/common.sh@340 -- # ver2_l=1 00:04:15.789 08:54:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:15.789 08:54:52 -- scripts/common.sh@343 -- # case "$op" in 00:04:15.789 08:54:52 -- scripts/common.sh@344 -- # : 1 00:04:15.789 08:54:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:15.789 08:54:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.789 08:54:52 -- scripts/common.sh@364 -- # decimal 1 00:04:15.789 08:54:52 -- scripts/common.sh@352 -- # local d=1 00:04:15.789 08:54:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.789 08:54:52 -- scripts/common.sh@354 -- # echo 1 00:04:15.789 08:54:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:15.789 08:54:52 -- scripts/common.sh@365 -- # decimal 2 00:04:15.789 08:54:52 -- scripts/common.sh@352 -- # local d=2 00:04:15.789 08:54:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.789 08:54:52 -- scripts/common.sh@354 -- # echo 2 00:04:15.789 08:54:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:15.789 08:54:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:15.789 08:54:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:15.789 08:54:52 -- scripts/common.sh@367 -- # return 0 00:04:15.789 08:54:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.789 08:54:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:15.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.789 --rc genhtml_branch_coverage=1 00:04:15.789 --rc genhtml_function_coverage=1 00:04:15.789 --rc genhtml_legend=1 00:04:15.789 --rc geninfo_all_blocks=1 00:04:15.789 --rc geninfo_unexecuted_blocks=1 00:04:15.789 00:04:15.789 ' 00:04:15.789 08:54:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:15.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.789 --rc genhtml_branch_coverage=1 00:04:15.789 --rc genhtml_function_coverage=1 00:04:15.789 --rc genhtml_legend=1 00:04:15.789 --rc geninfo_all_blocks=1 00:04:15.789 --rc geninfo_unexecuted_blocks=1 00:04:15.789 00:04:15.789 ' 00:04:15.789 08:54:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:15.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.790 --rc genhtml_branch_coverage=1 00:04:15.790 --rc genhtml_function_coverage=1 00:04:15.790 --rc genhtml_legend=1 00:04:15.790 --rc geninfo_all_blocks=1 00:04:15.790 --rc geninfo_unexecuted_blocks=1 00:04:15.790 00:04:15.790 ' 00:04:15.790 08:54:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:15.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.790 --rc genhtml_branch_coverage=1 00:04:15.790 --rc genhtml_function_coverage=1 00:04:15.790 --rc genhtml_legend=1 00:04:15.790 --rc geninfo_all_blocks=1 00:04:15.790 --rc geninfo_unexecuted_blocks=1 00:04:15.790 00:04:15.790 ' 00:04:15.790 08:54:52 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:15.790 08:54:52 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:15.790 08:54:52 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:15.790 08:54:52 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:15.790 08:54:52 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:15.790 08:54:52 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:15.790 08:54:52 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:15.790 08:54:52 -- setup/common.sh@18 -- # local node= 00:04:15.790 08:54:52 -- setup/common.sh@19 -- # local var val 00:04:15.790 08:54:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.790 08:54:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.790 08:54:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.790 08:54:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.790 08:54:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.790 08:54:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 5953120 kB' 'MemAvailable: 7348552 kB' 'Buffers: 3200 kB' 'Cached: 1608432 kB' 'SwapCached: 0 kB' 'Active: 459112 kB' 'Inactive: 1269600 kB' 'Active(anon): 127588 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 364 kB' 'Writeback: 0 kB' 'AnonPages: 118668 kB' 'Mapped: 53348 kB' 'Shmem: 10508 kB' 'KReclaimable: 62512 kB' 'Slab: 156140 kB' 'SReclaimable: 62512 kB' 'SUnreclaim: 93628 kB' 'KernelStack: 6480 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 321340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- 
setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.790 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.790 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.791 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.791 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.792 08:54:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.792 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.792 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.792 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.792 08:54:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.792 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.792 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.792 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.792 08:54:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.792 08:54:52 -- setup/common.sh@32 -- # continue 00:04:15.792 08:54:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.792 08:54:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.792 08:54:52 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.792 08:54:52 -- setup/common.sh@33 -- # echo 2048 00:04:15.792 08:54:52 -- setup/common.sh@33 -- # return 0 00:04:15.792 08:54:52 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:15.792 08:54:52 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:15.792 08:54:52 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:15.792 08:54:52 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:15.792 08:54:52 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:15.792 08:54:52 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:15.792 08:54:52 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:15.792 08:54:52 -- setup/hugepages.sh@207 -- # get_nodes 00:04:15.792 08:54:52 -- setup/hugepages.sh@27 -- # local node 00:04:15.792 08:54:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.792 08:54:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:15.792 08:54:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:15.792 08:54:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.792 08:54:52 -- setup/hugepages.sh@208 -- # clear_hp 00:04:15.792 08:54:52 -- setup/hugepages.sh@37 -- # local node hp 00:04:15.792 08:54:52 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.792 08:54:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.792 08:54:52 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.792 08:54:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.792 08:54:52 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.792 08:54:52 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:15.792 08:54:52 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:15.792 08:54:52 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:15.792 08:54:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.792 08:54:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.792 08:54:52 -- common/autotest_common.sh@10 -- # set +x 00:04:15.792 ************************************ 00:04:15.792 START TEST default_setup 00:04:15.792 ************************************ 00:04:15.792 08:54:52 -- common/autotest_common.sh@1114 -- # default_setup 00:04:15.792 08:54:52 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:15.792 08:54:52 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.792 08:54:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:15.792 08:54:52 -- setup/hugepages.sh@51 -- # shift 00:04:15.792 08:54:52 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:15.792 08:54:52 -- setup/hugepages.sh@52 -- # local node_ids 00:04:15.792 08:54:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.792 08:54:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.792 08:54:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:15.792 08:54:52 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:15.792 08:54:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.792 08:54:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.792 08:54:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:15.792 08:54:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.792 08:54:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.792 08:54:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:15.792 08:54:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:15.792 08:54:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:15.792 08:54:52 -- setup/hugepages.sh@73 -- # return 0 00:04:15.792 08:54:52 -- setup/hugepages.sh@137 -- # setup output 00:04:15.792 08:54:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.792 08:54:52 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.622 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.622 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.622 08:54:53 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:16.622 08:54:53 -- setup/hugepages.sh@89 -- # local node 00:04:16.622 08:54:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.622 08:54:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.622 08:54:53 -- setup/hugepages.sh@92 -- # local surp 00:04:16.622 08:54:53 -- setup/hugepages.sh@93 -- # local resv 00:04:16.622 08:54:53 -- setup/hugepages.sh@94 -- # local anon 00:04:16.622 08:54:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.622 08:54:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.622 08:54:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.622 08:54:53 -- setup/common.sh@18 -- # local node= 00:04:16.622 08:54:53 -- setup/common.sh@19 -- # local var val 00:04:16.622 08:54:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.622 08:54:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.622 08:54:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.622 08:54:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.622 08:54:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.622 08:54:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.622 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.622 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8059656 kB' 'MemAvailable: 9454912 kB' 'Buffers: 3200 kB' 'Cached: 1608424 kB' 'SwapCached: 0 kB' 'Active: 461052 kB' 'Inactive: 1269612 kB' 'Active(anon): 129528 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 120644 kB' 'Mapped: 53524 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155804 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93672 kB' 'KernelStack: 6432 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- 
setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.623 08:54:53 -- setup/common.sh@33 -- # echo 0 00:04:16.623 08:54:53 -- setup/common.sh@33 -- # return 0 00:04:16.623 08:54:53 -- setup/hugepages.sh@97 -- # anon=0 00:04:16.623 08:54:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.623 08:54:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.623 08:54:53 -- setup/common.sh@18 -- # local node= 00:04:16.623 08:54:53 -- setup/common.sh@19 -- # local var val 00:04:16.623 08:54:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.623 08:54:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.623 08:54:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.623 08:54:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.623 08:54:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.623 08:54:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8059752 kB' 'MemAvailable: 9455012 kB' 'Buffers: 3200 kB' 'Cached: 1608424 kB' 'SwapCached: 0 kB' 'Active: 460508 kB' 'Inactive: 1269616 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 119928 kB' 'Mapped: 53524 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155784 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93652 kB' 'KernelStack: 6452 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.623 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.623 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 
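[editor's note] The xtrace entries around this point all come from one loop in setup/common.sh: get_meminfo splits each /proc/meminfo line with IFS=': ', skips every field whose name does not match the one it was asked for, and echoes the value once it finds it (the AnonHugePages lookup just above ended with "echo 0" and "return 0", which is where anon=0 came from). Below is a minimal sketch of that lookup pattern; the helper name meminfo_field is hypothetical and the per-node handling of the real get_meminfo is left out.

    #!/usr/bin/env bash
    # Sketch of the lookup pattern seen in the trace: walk /proc/meminfo
    # with IFS=': ' and print the value of one named field. Illustrative
    # only; not the exact setup/common.sh implementation.
    meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated "continue" entries
            echo "$val"                        # the "kB" unit lands in the discarded field
            return 0
        done < /proc/meminfo
        return 1
    }

    # Example: the 2048 used as default_hugepages earlier in this stage
    # would come from a lookup like this.
    meminfo_field Hugepagesize

The same scan repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total, which is why this part of the log is dominated by "continue" entries.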
00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- 
setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 
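[editor's note] Earlier in this stage, get_test_nr_hugepages was called with 2097152 (kB) against the 2048 kB default page size, which is where nr_hugepages=1024 comes from, and verify_nr_hugepages further down checks that HugePages_Total matches that request plus any surplus and reserved pages. A small sketch of that accounting, using only the numbers visible in this log (it is an illustration of the arithmetic, not the setup/hugepages.sh code itself):

    #!/usr/bin/env bash
    # Hugepage accounting as implied by the trace values.
    requested_kb=2097152        # argument passed to get_test_nr_hugepages
    hugepagesize_kb=2048        # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$(( requested_kb / hugepagesize_kb ))   # -> 1024

    surp=0                      # HugePages_Surp from the meminfo snapshots
    resv=0                      # HugePages_Rsvd from the meminfo snapshots
    total=1024                  # HugePages_Total from the meminfo snapshots

    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"

With a single NUMA node (no_nodes=1), the whole 1024-page request is assigned to node 0, matching the nodes_test[_no_nodes]=1024 entry earlier in the trace.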
00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.624 08:54:53 -- setup/common.sh@33 -- # echo 0 00:04:16.624 08:54:53 -- setup/common.sh@33 -- # return 0 00:04:16.624 08:54:53 -- setup/hugepages.sh@99 -- # surp=0 00:04:16.624 08:54:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.624 08:54:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.624 08:54:53 -- setup/common.sh@18 -- # local node= 00:04:16.624 08:54:53 -- setup/common.sh@19 -- # local var val 00:04:16.624 08:54:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.624 08:54:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.624 08:54:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.624 08:54:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.624 08:54:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.624 08:54:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.624 
08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8059752 kB' 'MemAvailable: 9455016 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460112 kB' 'Inactive: 1269620 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 119828 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155768 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93636 kB' 'KernelStack: 6464 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 
08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.624 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.624 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 
08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.625 08:54:53 -- setup/common.sh@33 -- # echo 0 00:04:16.625 08:54:53 -- setup/common.sh@33 -- # return 0 00:04:16.625 08:54:53 -- setup/hugepages.sh@100 -- # resv=0 00:04:16.625 nr_hugepages=1024 00:04:16.625 08:54:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.625 resv_hugepages=0 00:04:16.625 08:54:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.625 surplus_hugepages=0 00:04:16.625 08:54:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.625 anon_hugepages=0 00:04:16.625 08:54:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.625 08:54:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.625 08:54:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.625 08:54:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.625 08:54:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.625 08:54:53 -- setup/common.sh@18 -- # local node= 00:04:16.625 08:54:53 -- setup/common.sh@19 -- # local var val 00:04:16.625 08:54:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.625 08:54:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.625 08:54:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.625 08:54:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.625 08:54:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.625 08:54:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8059752 kB' 'MemAvailable: 9455016 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460164 kB' 'Inactive: 1269620 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 119808 kB' 'Mapped: 53352 kB' 
'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6432 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 
-- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- 
setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.625 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.625 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- 
setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.626 08:54:53 -- setup/common.sh@33 -- # echo 1024 00:04:16.626 08:54:53 -- setup/common.sh@33 -- # return 0 00:04:16.626 08:54:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.626 08:54:53 -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.626 08:54:53 -- setup/hugepages.sh@27 -- # local node 00:04:16.626 08:54:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.626 08:54:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.626 08:54:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:16.626 08:54:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.626 08:54:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.626 08:54:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.626 08:54:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.626 08:54:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.626 08:54:53 -- setup/common.sh@18 -- # local node=0 00:04:16.626 08:54:53 -- setup/common.sh@19 -- # local var val 00:04:16.626 08:54:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.626 08:54:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.626 08:54:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.626 08:54:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.626 08:54:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.626 08:54:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8059752 kB' 'MemUsed: 4179364 kB' 'SwapCached: 0 kB' 'Active: 460140 kB' 'Inactive: 1269620 kB' 'Active(anon): 128616 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'FilePages: 1611628 kB' 'Mapped: 53352 kB' 'AnonPages: 119788 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62132 kB' 'Slab: 155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 
08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.626 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.626 08:54:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.885 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # continue 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.886 08:54:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.886 08:54:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.886 08:54:53 -- setup/common.sh@33 -- # echo 0 00:04:16.886 08:54:53 -- setup/common.sh@33 -- # return 0 00:04:16.886 08:54:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.886 08:54:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.886 08:54:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.886 08:54:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.886 node0=1024 expecting 1024 00:04:16.886 08:54:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.886 08:54:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.886 00:04:16.886 real 0m0.976s 00:04:16.886 user 0m0.474s 00:04:16.886 sys 0m0.440s 00:04:16.886 08:54:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:16.886 08:54:53 -- common/autotest_common.sh@10 -- # set +x 00:04:16.886 ************************************ 00:04:16.886 END TEST default_setup 00:04:16.886 ************************************ 00:04:16.886 08:54:53 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:16.886 08:54:53 
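[Editor's note] The default_setup trace above repeatedly exercises the get_meminfo lookup in setup/common.sh: it reads /proc/meminfo (or the per-node copy under /sys/devices/system/node/nodeN/meminfo), strips the "Node N " prefix, splits each line on ': ', and echoes the value once the requested field matches. Below is a minimal re-creation of that pattern for reference; meminfo_lookup is an illustrative name, not the exact SPDK helper.

```bash
#!/usr/bin/env bash
# Illustrative re-creation of the lookup pattern in the trace above
# (mapfile + "Node N " prefix strip + IFS=': ' field split); meminfo_lookup
# is a made-up name, not the literal SPDK setup/common.sh function.
shopt -s extglob

meminfo_lookup() {                        # usage: meminfo_lookup <Field> [<node>]
    local get=$1 node=${2:-} var val _ mem_f mem
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines begin with "Node <id> "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # skip every field except the one asked for
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

meminfo_lookup HugePages_Total            # -> 1024 system-wide, per the log above
meminfo_lookup HugePages_Surp 0           # -> 0 for node 0, per the log above
```

In the log those two answers (1024 and 0) are what the `(( 1024 == nr_hugepages + surp + resv ))` check consumed before default_setup was declared passing.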
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.886 08:54:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.886 08:54:53 -- common/autotest_common.sh@10 -- # set +x 00:04:16.886 ************************************ 00:04:16.886 START TEST per_node_1G_alloc 00:04:16.886 ************************************ 00:04:16.886 08:54:53 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:16.886 08:54:53 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:16.886 08:54:53 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:16.886 08:54:53 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.886 08:54:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:16.886 08:54:53 -- setup/hugepages.sh@51 -- # shift 00:04:16.886 08:54:53 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:16.886 08:54:53 -- setup/hugepages.sh@52 -- # local node_ids 00:04:16.886 08:54:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.886 08:54:53 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.886 08:54:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:16.886 08:54:53 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:16.886 08:54:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.886 08:54:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.886 08:54:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:16.886 08:54:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.886 08:54:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.886 08:54:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:16.886 08:54:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.886 08:54:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:16.886 08:54:53 -- setup/hugepages.sh@73 -- # return 0 00:04:16.886 08:54:53 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:16.886 08:54:53 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:16.886 08:54:53 -- setup/hugepages.sh@146 -- # setup output 00:04:16.886 08:54:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.886 08:54:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.147 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.147 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.147 08:54:53 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:17.147 08:54:53 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:17.147 08:54:53 -- setup/hugepages.sh@89 -- # local node 00:04:17.147 08:54:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.147 08:54:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.147 08:54:53 -- setup/hugepages.sh@92 -- # local surp 00:04:17.147 08:54:53 -- setup/hugepages.sh@93 -- # local resv 00:04:17.147 08:54:53 -- setup/hugepages.sh@94 -- # local anon 00:04:17.147 08:54:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.147 08:54:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.147 08:54:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.147 08:54:54 -- setup/common.sh@18 -- # local node= 00:04:17.147 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.147 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.147 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.147 08:54:54 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.147 08:54:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.147 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.147 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.147 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9104836 kB' 'MemAvailable: 10500100 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460676 kB' 'Inactive: 1269620 kB' 'Active(anon): 129152 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 120292 kB' 'Mapped: 53456 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155720 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93588 kB' 'KernelStack: 6424 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:17.147 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.147 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.147 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 
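[Editor's note] The per_node_1G_alloc prologue traced just above converts the requested 1048576 kB (1 GiB) into a hugepage count before running the setup script pinned to node 0; with the 2048 kB Hugepagesize reported in the dump, that works out to the 512 pages now visible as HugePages_Total. A back-of-the-envelope sketch of that arithmetic, with illustrative variable names rather than the SPDK ones:

```bash
# Size -> page-count conversion behind "get_test_nr_hugepages 1048576 0".
size_kb=1048576                              # 1 GiB requested by per_node_1G_alloc
hugepagesize_kb=2048                         # "Hugepagesize: 2048 kB" in the dump
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "$nr_hugepages"                         # 512, matching "HugePages_Total: 512"

# The test then runs the setup script targeted at node 0, roughly:
#   NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
```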
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 
08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.148 08:54:54 -- setup/common.sh@33 -- # echo 0 00:04:17.148 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.148 08:54:54 -- setup/hugepages.sh@97 -- # anon=0 00:04:17.148 08:54:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.148 08:54:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.148 08:54:54 -- setup/common.sh@18 -- # local node= 00:04:17.148 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.148 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.148 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.148 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.148 08:54:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.148 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.148 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9104836 kB' 'MemAvailable: 10500100 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460252 kB' 'Inactive: 1269620 kB' 
'Active(anon): 128728 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 119828 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155752 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93620 kB' 'KernelStack: 6448 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # 
continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.149 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.149 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 08:54:54 -- setup/common.sh@33 -- # echo 0 00:04:17.150 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.150 08:54:54 -- setup/hugepages.sh@99 -- # surp=0 00:04:17.150 08:54:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.150 08:54:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.150 08:54:54 -- setup/common.sh@18 -- # local node= 00:04:17.150 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.150 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.150 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.150 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.150 08:54:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.150 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.150 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9105096 kB' 'MemAvailable: 10500360 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460472 kB' 'Inactive: 1269620 kB' 'Active(anon): 128948 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 120104 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6480 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 
'DirectMap1G: 9437184 kB' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 
08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.413 08:54:54 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.413 08:54:54 -- setup/common.sh@33 -- # echo 0 00:04:17.413 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.413 nr_hugepages=512 00:04:17.413 resv_hugepages=0 00:04:17.413 surplus_hugepages=0 00:04:17.413 anon_hugepages=0 00:04:17.413 08:54:54 -- setup/hugepages.sh@100 -- # resv=0 00:04:17.413 08:54:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:17.413 08:54:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.413 08:54:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.413 08:54:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.413 08:54:54 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:17.413 08:54:54 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:17.413 08:54:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.413 08:54:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.413 08:54:54 -- setup/common.sh@18 -- # local node= 00:04:17.413 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.413 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.413 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.413 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.413 08:54:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.413 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.413 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.413 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9105172 kB' 'MemAvailable: 10500436 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460504 kB' 'Inactive: 1269620 kB' 'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 120144 kB' 'Mapped: 53404 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155736 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93604 kB' 'KernelStack: 6432 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 
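For readability, here is a minimal sketch of the lookup that the repeated xtrace entries above correspond to. It is reconstructed from the trace and is not the actual setup/common.sh source; the helper name is illustrative. The idea: choose /proc/meminfo or the per-node meminfo file, drop the "Node <id>" prefix, then scan key/value pairs with IFS=': ' until the requested field matches.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <id> " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Rsvd     -> system-wide reserved hugepages
    #      get_meminfo_sketch HugePages_Surp 0   -> surplus hugepages on node 0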
00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 
00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.414 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.414 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.415 08:54:54 -- setup/common.sh@33 -- # echo 512 00:04:17.415 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.415 08:54:54 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:17.415 08:54:54 -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.415 08:54:54 -- setup/hugepages.sh@27 -- # local node 00:04:17.415 08:54:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.415 08:54:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.415 08:54:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:17.415 08:54:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.415 08:54:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.415 08:54:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.415 08:54:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.415 08:54:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.415 08:54:54 -- setup/common.sh@18 -- # local node=0 00:04:17.415 08:54:54 -- 
setup/common.sh@19 -- # local var val 00:04:17.415 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.415 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.415 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.415 08:54:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.415 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.415 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9105172 kB' 'MemUsed: 3133944 kB' 'SwapCached: 0 kB' 'Active: 460168 kB' 'Inactive: 1269620 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'FilePages: 1611628 kB' 'Mapped: 53352 kB' 'AnonPages: 119808 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62132 kB' 'Slab: 155732 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.415 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.415 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 
00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.416 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.416 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.416 08:54:54 -- setup/common.sh@33 -- # echo 0 00:04:17.416 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.416 08:54:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.416 08:54:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.416 08:54:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.416 08:54:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.416 node0=512 expecting 512 00:04:17.416 08:54:54 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:17.416 08:54:54 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:17.416 00:04:17.416 real 0m0.534s 00:04:17.416 user 0m0.261s 00:04:17.416 sys 0m0.309s 00:04:17.416 08:54:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.416 08:54:54 -- common/autotest_common.sh@10 -- # set +x 00:04:17.416 ************************************ 00:04:17.416 END TEST per_node_1G_alloc 00:04:17.416 ************************************ 00:04:17.416 08:54:54 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:17.416 08:54:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.416 08:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.416 08:54:54 -- common/autotest_common.sh@10 -- # set +x 00:04:17.416 ************************************ 00:04:17.416 START TEST even_2G_alloc 00:04:17.416 ************************************ 00:04:17.416 08:54:54 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:17.416 08:54:54 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:17.416 08:54:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.416 08:54:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.416 08:54:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.416 08:54:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.416 08:54:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.416 08:54:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.416 08:54:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.416 08:54:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.416 08:54:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:17.416 08:54:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.416 08:54:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.416 08:54:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
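The arithmetic traced above boils down to a small consistency check. A hedged sketch of what "node0=512 expecting 512" verifies, reusing the illustrative get_meminfo_sketch helper from the earlier sketch rather than the real setup/hugepages.sh:

    verify_nr_hugepages_sketch() {
        local expected=$1 node=${2:-0}
        local total surp resv node_total
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        # The system-wide pool must account for every expected page.
        (( expected == total + surp + resv )) || return 1
        # The node under test must hold the count the test configured.
        node_total=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node${node}=${node_total} expecting ${expected}"
        [[ $node_total == "$expected" ]]
    }

    # For this run: verify_nr_hugepages_sketch 512 0   -> "node0=512 expecting 512"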
00:04:17.416 08:54:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.416 08:54:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.416 08:54:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:17.416 08:54:54 -- setup/hugepages.sh@83 -- # : 0 00:04:17.416 08:54:54 -- setup/hugepages.sh@84 -- # : 0 00:04:17.416 08:54:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.416 08:54:54 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:17.416 08:54:54 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:17.416 08:54:54 -- setup/hugepages.sh@153 -- # setup output 00:04:17.416 08:54:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.416 08:54:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.695 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.695 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.695 08:54:54 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:17.695 08:54:54 -- setup/hugepages.sh@89 -- # local node 00:04:17.695 08:54:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.695 08:54:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.695 08:54:54 -- setup/hugepages.sh@92 -- # local surp 00:04:17.695 08:54:54 -- setup/hugepages.sh@93 -- # local resv 00:04:17.695 08:54:54 -- setup/hugepages.sh@94 -- # local anon 00:04:17.695 08:54:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.695 08:54:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.695 08:54:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.695 08:54:54 -- setup/common.sh@18 -- # local node= 00:04:17.695 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.695 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.695 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.695 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.695 08:54:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.695 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.695 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8051484 kB' 'MemAvailable: 9446748 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460648 kB' 'Inactive: 1269620 kB' 'Active(anon): 129124 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 120284 kB' 'Mapped: 53484 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155728 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93596 kB' 'KernelStack: 6472 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.695 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.695 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- 
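The AnonHugePages lookup in progress here follows the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test seen a few entries earlier. As a hedged illustration only (the sysfs path is inferred from that value string, not taken from the script, and the helper name is from the earlier sketch), the intent is roughly:

    # If transparent hugepages are not set to [never], record AnonHugePages so
    # THP-backed anonymous memory is not counted against the explicit pool.
    anon=0
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
    echo "anon_hugepages=$anon"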
setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.696 08:54:54 -- setup/common.sh@33 -- # echo 0 00:04:17.696 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.696 08:54:54 -- setup/hugepages.sh@97 -- # anon=0 00:04:17.696 08:54:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.696 08:54:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.696 08:54:54 -- setup/common.sh@18 -- # local node= 00:04:17.696 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.696 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.696 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.696 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.696 08:54:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.696 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.696 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8051484 kB' 'MemAvailable: 9446748 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460320 kB' 'Inactive: 1269620 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 119920 kB' 'Mapped: 53484 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155708 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93576 kB' 'KernelStack: 6408 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.696 08:54:54 -- 
setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.696 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.696 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.697 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.697 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 
-- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- 
setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.974 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.974 08:54:54 -- setup/common.sh@33 -- # echo 0 00:04:17.974 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.974 08:54:54 -- setup/hugepages.sh@99 -- # surp=0 
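The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key (AnonHugePages first, then HugePages_Surp) and returning the value of the requested field, which setup/hugepages.sh stores as anon=0 and surp=0. The following is a minimal, hedged sketch of that parsing loop reconstructed from the trace; the names (get, node, mem_f, mem, var, val) follow the trace, but the body is a simplified stand-in rather than the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above (simplified, not the exact helper).
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo() {
    local get=$1           # requested key, e.g. HugePages_Surp or AnonHugePages
    local node=${2:-}      # optional NUMA node number
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node lookups read the node-specific meminfo instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node N "
    # Scan "key: value [kB]" pairs until the requested key is found.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

In the trace, setup/hugepages.sh calls it as surp=$(get_meminfo HugePages_Surp) for the global counter and later as get_meminfo HugePages_Surp 0 to read node0's counters from /sys/devices/system/node/node0/meminfo.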
00:04:17.974 08:54:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.974 08:54:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.974 08:54:54 -- setup/common.sh@18 -- # local node= 00:04:17.974 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.974 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.974 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.974 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.974 08:54:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.974 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.974 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.974 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8051484 kB' 'MemAvailable: 9446748 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460256 kB' 'Inactive: 1269620 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 53392 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155724 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93592 kB' 'KernelStack: 6416 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.975 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.975 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 
-- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.976 08:54:54 -- setup/common.sh@33 -- # echo 0 00:04:17.976 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.976 08:54:54 -- setup/hugepages.sh@100 -- # resv=0 00:04:17.976 nr_hugepages=1024 00:04:17.976 08:54:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.976 resv_hugepages=0 00:04:17.976 08:54:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.976 surplus_hugepages=0 00:04:17.976 08:54:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.976 anon_hugepages=0 00:04:17.976 08:54:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.976 08:54:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.976 08:54:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.976 08:54:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.976 08:54:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.976 08:54:54 -- setup/common.sh@18 -- # local node= 00:04:17.976 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.976 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.976 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.976 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.976 08:54:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.976 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.976 08:54:54 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8051484 kB' 'MemAvailable: 9446748 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460220 kB' 'Inactive: 1269620 kB' 'Active(anon): 128696 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'AnonPages: 119796 kB' 'Mapped: 53392 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155724 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93592 kB' 'KernelStack: 6416 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.976 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.976 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 
08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 
08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.977 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.977 08:54:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.978 08:54:54 -- setup/common.sh@33 -- # echo 1024 00:04:17.978 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.978 08:54:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.978 08:54:54 -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.978 08:54:54 -- setup/hugepages.sh@27 -- # local node 00:04:17.978 08:54:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.978 08:54:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.978 08:54:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:17.978 08:54:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.978 08:54:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.978 08:54:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.978 08:54:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.978 08:54:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.978 08:54:54 -- setup/common.sh@18 -- # local node=0 00:04:17.978 08:54:54 -- setup/common.sh@19 -- # local var val 00:04:17.978 08:54:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.978 08:54:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.978 08:54:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.978 08:54:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.978 08:54:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.978 08:54:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8051484 kB' 'MemUsed: 4187632 kB' 'SwapCached: 0 kB' 'Active: 460148 kB' 'Inactive: 1269620 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 388 kB' 'Writeback: 0 kB' 'FilePages: 1611628 kB' 'Mapped: 53392 kB' 'AnonPages: 119724 kB' 'Shmem: 10484 kB' 'KernelStack: 6400 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62132 kB' 'Slab: 155724 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Surp: 0' 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.978 
08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.978 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.978 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # continue 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.979 08:54:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.979 08:54:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.979 08:54:54 -- setup/common.sh@33 -- # echo 0 00:04:17.979 08:54:54 -- setup/common.sh@33 -- # return 0 00:04:17.979 08:54:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.979 08:54:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.979 08:54:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.979 08:54:54 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.979 08:54:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.979 node0=1024 expecting 1024 00:04:17.979 08:54:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.979 00:04:17.979 real 0m0.523s 00:04:17.979 user 0m0.264s 00:04:17.979 sys 0m0.293s 00:04:17.979 08:54:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.979 08:54:54 -- common/autotest_common.sh@10 -- # set +x 00:04:17.979 ************************************ 00:04:17.979 END TEST even_2G_alloc 00:04:17.979 ************************************ 00:04:17.979 08:54:54 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:17.979 08:54:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.979 08:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.979 08:54:54 -- common/autotest_common.sh@10 -- # set +x 00:04:17.979 ************************************ 00:04:17.979 START TEST odd_alloc 00:04:17.979 ************************************ 00:04:17.979 08:54:54 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:17.979 08:54:54 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:17.979 08:54:54 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:17.979 08:54:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.979 08:54:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.979 08:54:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:17.979 08:54:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.979 08:54:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.979 08:54:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.980 08:54:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:17.980 08:54:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:17.980 08:54:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.980 08:54:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.980 08:54:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.980 08:54:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.980 08:54:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.980 08:54:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:17.980 08:54:54 -- setup/hugepages.sh@83 -- # : 0 00:04:17.980 08:54:54 -- setup/hugepages.sh@84 -- # : 0 00:04:17.980 08:54:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.980 08:54:54 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:17.980 08:54:54 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:17.980 08:54:54 -- setup/hugepages.sh@160 -- # setup output 00:04:17.980 08:54:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.980 08:54:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.247 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.247 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.247 08:54:55 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:18.247 08:54:55 -- setup/hugepages.sh@89 -- # local node 00:04:18.247 08:54:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.247 08:54:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.247 08:54:55 -- setup/hugepages.sh@92 -- # local surp 00:04:18.247 08:54:55 -- setup/hugepages.sh@93 -- # local resv 00:04:18.247 08:54:55 -- 
setup/hugepages.sh@94 -- # local anon 00:04:18.247 08:54:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.247 08:54:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.247 08:54:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.247 08:54:55 -- setup/common.sh@18 -- # local node= 00:04:18.247 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:18.247 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.247 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.247 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.247 08:54:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.247 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.247 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8049148 kB' 'MemAvailable: 9444412 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460872 kB' 'Inactive: 1269620 kB' 'Active(anon): 129348 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120260 kB' 'Mapped: 53516 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155736 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93604 kB' 'KernelStack: 6472 kB' 'PageTables: 4728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 
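The hugepages.sh@96 test above checks whether transparent hugepages are fully disabled before AnonHugePages is sampled at @97: the current sysfs setting, 'always [madvise] never', is matched against *[never]*. A minimal standalone sketch of that gate, using the usual sysfs path and plain awk rather than the script's own helpers (all names below are illustrative only, not lifted from hugepages.sh):

# gate anon-hugepage accounting on the THP setting, in the spirit of hugepages.sh@96
thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_setting != *"[never]"* ]]; then
    # THP is not fully disabled, so anonymous hugepages may exist and are worth recording
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon_kb} kB"
fi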
00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.247 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.247 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.248 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.248 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.248 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.248 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.248 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.248 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.248 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.248 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.248 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 
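The long runs of '# continue' entries in this scan are setup/common.sh's get_meminfo walking the captured /proc/meminfo contents with IFS=': ', skipping every field whose name does not match the requested key (AnonHugePages here, HugePages_Surp/Rsvd/Total further on) and finally echoing the matching value. A minimal standalone sketch of that lookup pattern, reading /proc/meminfo directly instead of the mapfile/printf pipeline the script traces through (function and variable names are illustrative only):

get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each skipped field shows up as a '# continue' entry in the trace
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    echo 0                                 # nothing matched: fall back to 0
}
get_meminfo_field HugePages_Surp           # prints 0 for the memory state dumped above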
00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.509 08:54:55 -- setup/common.sh@33 -- # echo 0 00:04:18.509 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:18.509 08:54:55 -- setup/hugepages.sh@97 -- # anon=0 00:04:18.509 08:54:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.509 08:54:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.509 08:54:55 -- setup/common.sh@18 -- # local node= 00:04:18.509 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:18.509 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.509 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.509 08:54:55 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:18.509 08:54:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.509 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.509 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8048896 kB' 'MemAvailable: 9444160 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460320 kB' 'Inactive: 1269620 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6432 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.509 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.509 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 
-- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- 
setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.510 08:54:55 -- setup/common.sh@33 -- # echo 0 00:04:18.510 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:18.510 08:54:55 -- setup/hugepages.sh@99 -- # surp=0 00:04:18.510 08:54:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.510 08:54:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.510 08:54:55 -- setup/common.sh@18 -- # local node= 00:04:18.510 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:18.510 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.510 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.510 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.510 08:54:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.510 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.510 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8048896 kB' 'MemAvailable: 9444160 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460252 kB' 'Inactive: 1269620 kB' 'Active(anon): 128728 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119804 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 
155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6400 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.510 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.510 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 
08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 
-- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 
00:04:18.511 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.511 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.511 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.512 08:54:55 -- setup/common.sh@33 -- # echo 0 00:04:18.512 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:18.512 08:54:55 -- setup/hugepages.sh@100 -- # resv=0 00:04:18.512 nr_hugepages=1025 00:04:18.512 08:54:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:18.512 resv_hugepages=0 00:04:18.512 08:54:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.512 surplus_hugepages=0 00:04:18.512 08:54:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.512 anon_hugepages=0 00:04:18.512 08:54:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.512 08:54:55 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:18.512 08:54:55 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:18.512 08:54:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.512 08:54:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.512 08:54:55 -- setup/common.sh@18 -- # local node= 00:04:18.512 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:18.512 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.512 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.512 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.512 08:54:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.512 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.512 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8049268 kB' 'MemAvailable: 9444532 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460236 kB' 'Inactive: 1269620 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119828 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155736 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93604 kB' 'KernelStack: 6400 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 
'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 
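For reference, the 1025 asserted at hugepages.sh@107 above follows from the odd_alloc setup traced earlier: HUGEMEM=2049 MB is 2098176 kB, and with the 2048 kB Hugepagesize reported in the meminfo dumps that is 1024.5 pages, which get_test_nr_hugepages settles on as the odd count 1025; with surp=0 and resv=0 read back from HugePages_Surp and HugePages_Rsvd, (( 1025 == nr_hugepages + surp + resv )) holds. A small arithmetic sketch of that expectation (variable names and the round-up formula are illustrative, not lifted from hugepages.sh):

# reproduce the expected odd hugepage count for the odd_alloc test
hugemem_mb=2049                                                # HUGEMEM exported by the test
size_kb=$((hugemem_mb * 1024))                                 # 2098176 kB, the size handed to get_test_nr_hugepages
hugepagesize_kb=2048                                           # Hugepagesize from the meminfo dumps
pages=$(((size_kb + hugepagesize_kb - 1) / hugepagesize_kb))   # rounds 1024.5 up to 1025
surp=0 resv=0                                                  # HugePages_Surp / HugePages_Rsvd both read back as 0
(( 1025 == pages + surp + resv )) && echo "nr_hugepages=$pages as expected"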
00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.512 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.512 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 
-- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.513 08:54:55 -- setup/common.sh@33 -- # echo 1025 00:04:18.513 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:18.513 08:54:55 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:18.513 08:54:55 -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.513 08:54:55 -- setup/hugepages.sh@27 -- # local node 00:04:18.513 08:54:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.513 08:54:55 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:18.513 08:54:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:18.513 08:54:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.513 08:54:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.513 08:54:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.513 08:54:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.513 08:54:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.513 08:54:55 -- setup/common.sh@18 -- # local node=0 00:04:18.513 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:18.513 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.513 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.513 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.513 08:54:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.513 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.513 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8049268 kB' 'MemUsed: 4189848 kB' 'SwapCached: 0 kB' 'Active: 460160 kB' 'Inactive: 1269620 kB' 'Active(anon): 128636 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1611628 kB' 'Mapped: 53352 kB' 'AnonPages: 119712 kB' 'Shmem: 10484 kB' 'KernelStack: 6436 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62132 kB' 'Slab: 155736 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.513 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.513 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- 
setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # continue 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.514 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.514 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.514 08:54:55 -- setup/common.sh@33 -- # echo 0 00:04:18.514 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:18.514 08:54:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.514 08:54:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.514 08:54:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.514 node0=1025 expecting 1025 00:04:18.514 08:54:55 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:18.514 08:54:55 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:18.514 00:04:18.514 real 0m0.531s 00:04:18.514 user 0m0.275s 00:04:18.514 sys 0m0.289s 00:04:18.514 08:54:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.514 08:54:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.514 ************************************ 00:04:18.514 END TEST odd_alloc 00:04:18.514 ************************************ 00:04:18.514 08:54:55 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:18.514 08:54:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.514 08:54:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.514 08:54:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.514 ************************************ 00:04:18.514 START TEST custom_alloc 00:04:18.514 ************************************ 00:04:18.514 08:54:55 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:18.514 08:54:55 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:18.514 08:54:55 -- setup/hugepages.sh@169 -- # local node 00:04:18.514 08:54:55 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:18.514 08:54:55 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:18.514 08:54:55 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 
_nr_hugepages=0 00:04:18.514 08:54:55 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:18.514 08:54:55 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:18.514 08:54:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:18.514 08:54:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.514 08:54:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.514 08:54:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.514 08:54:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:18.514 08:54:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:18.514 08:54:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.514 08:54:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.514 08:54:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.514 08:54:55 -- setup/hugepages.sh@83 -- # : 0 00:04:18.514 08:54:55 -- setup/hugepages.sh@84 -- # : 0 00:04:18.514 08:54:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:18.514 08:54:55 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:18.514 08:54:55 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:18.514 08:54:55 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:18.514 08:54:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.514 08:54:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.514 08:54:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:18.514 08:54:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:18.514 08:54:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.514 08:54:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.514 08:54:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:18.514 08:54:55 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:18.514 08:54:55 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:18.514 08:54:55 -- setup/hugepages.sh@78 -- # return 0 00:04:18.514 08:54:55 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:18.514 08:54:55 -- setup/hugepages.sh@187 -- # setup output 00:04:18.514 08:54:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.514 08:54:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.035 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.035 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.035 08:54:55 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:19.035 08:54:55 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:19.035 08:54:55 -- setup/hugepages.sh@89 -- # local node 00:04:19.035 08:54:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.035 08:54:55 -- setup/hugepages.sh@91 -- # local 
sorted_s 00:04:19.035 08:54:55 -- setup/hugepages.sh@92 -- # local surp 00:04:19.035 08:54:55 -- setup/hugepages.sh@93 -- # local resv 00:04:19.035 08:54:55 -- setup/hugepages.sh@94 -- # local anon 00:04:19.035 08:54:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.035 08:54:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.035 08:54:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.035 08:54:55 -- setup/common.sh@18 -- # local node= 00:04:19.035 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:19.035 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.035 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.035 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.035 08:54:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.035 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.035 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9100020 kB' 'MemAvailable: 10495284 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460440 kB' 'Inactive: 1269620 kB' 'Active(anon): 128916 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120284 kB' 'Mapped: 53580 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155708 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93576 kB' 'KernelStack: 6408 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.035 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.035 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # 
continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.036 08:54:55 -- setup/common.sh@33 -- # echo 0 00:04:19.036 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:19.036 08:54:55 -- setup/hugepages.sh@97 -- # anon=0 00:04:19.036 08:54:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.036 08:54:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.036 08:54:55 -- setup/common.sh@18 -- # local node= 00:04:19.036 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:19.036 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 
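Before the verification reads above, the custom_alloc setup traced earlier converted the requested 1048576 kB into 512 hugepages using the default 2048 kB hugepage size and exported a single-node split via HUGENODE='nodes_hp[0]=512'. A minimal sketch of that conversion, assuming the illustrative names default_hugepagesize_kb and pages_for_size_kb (the real work happens in get_test_nr_hugepages / get_test_nr_hugepages_per_node):

    default_hugepagesize_kb() {
        awk '/^Hugepagesize:/ {print $2}' /proc/meminfo    # 2048 on this machine
    }

    pages_for_size_kb() {
        local size_kb=$1
        echo $(( size_kb / $(default_hugepagesize_kb) ))   # 1048576 / 2048 = 512
    }

    nr=$(pages_for_size_kb 1048576)
    HUGENODE="nodes_hp[0]=$nr"        # single-node split, as in the trace
    echo "$HUGENODE"                  # nodes_hp[0]=512

With only one NUMA node present (no_nodes=1 in the trace), the whole allocation lands on node 0, which is what the verification reads that follow are checking.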
00:04:19.036 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.036 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.036 08:54:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.036 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.036 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9099768 kB' 'MemAvailable: 10495032 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460284 kB' 'Inactive: 1269620 kB' 'Active(anon): 128760 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119880 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6432 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.036 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.036 08:54:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # 
continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.037 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.037 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.038 08:54:55 -- setup/common.sh@33 -- # echo 0 00:04:19.038 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:19.038 08:54:55 -- setup/hugepages.sh@99 -- # surp=0 00:04:19.038 08:54:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.038 08:54:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.038 08:54:55 -- setup/common.sh@18 -- # local node= 00:04:19.038 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:19.038 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.038 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.038 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.038 08:54:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.038 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.038 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9099768 kB' 'MemAvailable: 10495032 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460304 kB' 'Inactive: 1269620 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6432 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.038 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.038 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 
00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.039 08:54:55 -- setup/common.sh@33 -- # echo 0 00:04:19.039 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:19.039 08:54:55 -- setup/hugepages.sh@100 -- # resv=0 00:04:19.039 nr_hugepages=512 00:04:19.039 08:54:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:19.039 resv_hugepages=0 00:04:19.039 08:54:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.039 surplus_hugepages=0 00:04:19.039 08:54:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.039 anon_hugepages=0 00:04:19.039 08:54:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.039 08:54:55 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:19.039 08:54:55 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:19.039 08:54:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.039 08:54:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.039 08:54:55 -- setup/common.sh@18 -- # local node= 00:04:19.039 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:19.039 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.039 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.039 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.039 08:54:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.039 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.039 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9100092 kB' 'MemAvailable: 10495356 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460360 kB' 'Inactive: 1269620 kB' 'Active(anon): 128836 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119976 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155736 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93604 kB' 'KernelStack: 6448 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 08:54:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 
-- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # continue 
00:04:19.040 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 
-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.041 08:54:55 -- setup/common.sh@33 -- # echo 512 00:04:19.041 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:19.041 08:54:55 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:19.041 08:54:55 -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.041 08:54:55 -- setup/hugepages.sh@27 -- # local node 
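The xtrace output above is one lookup repeated for different keys: setup/common.sh points mem_f at /proc/meminfo (or at /sys/devices/system/node/nodeN/meminfo when a node is given), strips any "Node N " prefix, splits each line on ': ', and echoes the value once the requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total, ...) matches. Below is a minimal standalone sketch of that lookup, reconstructed from the traced commands; the helper name get_meminfo_value is illustrative only and is not the real setup/common.sh function.

#!/usr/bin/env bash
# get_meminfo_value KEY [NODE] - echo the value of KEY from /proc/meminfo,
# or from the per-node meminfo file when NODE is given (those lines carry
# a "Node N " prefix that has to be stripped before splitting).
get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# The kind of accounting check the test performs at this point: surplus
# plus reserved plus the configured page count must match HugePages_Total
# (512 in this run).
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)
total=$(get_meminfo_value HugePages_Total)
echo "total=$total surp=$surp resv=$resv"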
00:04:19.041 08:54:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.041 08:54:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.041 08:54:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:19.041 08:54:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.041 08:54:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.041 08:54:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.041 08:54:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.041 08:54:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.041 08:54:55 -- setup/common.sh@18 -- # local node=0 00:04:19.041 08:54:55 -- setup/common.sh@19 -- # local var val 00:04:19.041 08:54:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.041 08:54:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.041 08:54:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.041 08:54:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.041 08:54:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.041 08:54:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 9100092 kB' 'MemUsed: 3139024 kB' 'SwapCached: 0 kB' 'Active: 460296 kB' 'Inactive: 1269620 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1611628 kB' 'Mapped: 53352 kB' 'AnonPages: 119860 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62132 kB' 'Slab: 155736 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 08:54:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- 
setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 
08:54:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # continue 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 08:54:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 08:54:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 08:54:55 -- setup/common.sh@33 -- # echo 0 00:04:19.042 08:54:55 -- setup/common.sh@33 -- # return 0 00:04:19.042 08:54:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.042 08:54:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.042 08:54:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.042 08:54:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.042 node0=512 expecting 512 00:04:19.042 08:54:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:19.042 08:54:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:19.042 00:04:19.042 real 0m0.550s 00:04:19.042 user 0m0.263s 00:04:19.042 sys 0m0.298s 00:04:19.042 08:54:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.042 08:54:55 -- common/autotest_common.sh@10 -- # set +x 00:04:19.042 ************************************ 00:04:19.043 END TEST custom_alloc 00:04:19.043 ************************************ 00:04:19.043 08:54:55 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:19.043 08:54:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.043 08:54:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.043 08:54:55 -- common/autotest_common.sh@10 -- # set +x 00:04:19.043 ************************************ 00:04:19.043 START TEST no_shrink_alloc 00:04:19.043 ************************************ 00:04:19.043 08:54:55 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:19.043 08:54:55 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:19.043 08:54:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.043 08:54:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 
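With custom_alloc done (node0 held the expected 512 pages), no_shrink_alloc begins the same way: get_test_nr_hugepages turns a size budget into a page count and assigns it to the nodes named by the caller. Here the requested 2097152 (a size in kB, judging by the arithmetic) divided by the 2048 kB Hugepagesize gives 1024 pages, all placed on node 0. The sketch below mirrors that step under those assumptions; the function name pages_for_size is illustrative, not the real hugepages.sh helper.

#!/usr/bin/env bash
# pages_for_size SIZE_KB NODE... - convert a pool size in kB into a
# hugepage count and print the per-node assignment, mirroring the
# traced get_test_nr_hugepages / get_test_nr_hugepages_per_node steps.
pages_for_size() {
    local size_kb=$1; shift
    local nodes=("$@")
    local hugepagesize_kb
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    local nr_hugepages=$(( size_kb / hugepagesize_kb ))
    # As in the trace, every node listed by the caller is asked to host
    # the full page count; this run names only node 0.
    local node
    for node in "${nodes[@]}"; do
        echo "node=$node pages=$nr_hugepages"
    done
}

# 2097152 kB / 2048 kB per page -> "node=0 pages=1024", matching
# nr_hugepages=1024 and nodes_test[0]=1024 in the trace that follows.
pages_for_size 2097152 0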
00:04:19.043 08:54:55 -- setup/hugepages.sh@51 -- # shift 00:04:19.043 08:54:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:19.043 08:54:55 -- setup/hugepages.sh@52 -- # local node_ids 00:04:19.043 08:54:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.043 08:54:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.043 08:54:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:19.043 08:54:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:19.043 08:54:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.043 08:54:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.043 08:54:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:19.043 08:54:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.043 08:54:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.043 08:54:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:19.043 08:54:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.043 08:54:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:19.043 08:54:55 -- setup/hugepages.sh@73 -- # return 0 00:04:19.043 08:54:55 -- setup/hugepages.sh@198 -- # setup output 00:04:19.043 08:54:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.043 08:54:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.613 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.613 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.613 08:54:56 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:19.613 08:54:56 -- setup/hugepages.sh@89 -- # local node 00:04:19.613 08:54:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.613 08:54:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.613 08:54:56 -- setup/hugepages.sh@92 -- # local surp 00:04:19.613 08:54:56 -- setup/hugepages.sh@93 -- # local resv 00:04:19.613 08:54:56 -- setup/hugepages.sh@94 -- # local anon 00:04:19.613 08:54:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.613 08:54:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.613 08:54:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.613 08:54:56 -- setup/common.sh@18 -- # local node= 00:04:19.613 08:54:56 -- setup/common.sh@19 -- # local var val 00:04:19.613 08:54:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.613 08:54:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.613 08:54:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.613 08:54:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.613 08:54:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.613 08:54:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8053516 kB' 'MemAvailable: 9448780 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460608 kB' 'Inactive: 1269620 kB' 'Active(anon): 129084 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120172 kB' 'Mapped: 53436 kB' 'Shmem: 10484 kB' 
'KReclaimable: 62132 kB' 'Slab: 155704 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93572 kB' 'KernelStack: 6408 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 
08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.613 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.613 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.613 
08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.614 08:54:56 -- setup/common.sh@33 -- # echo 0 00:04:19.614 08:54:56 -- setup/common.sh@33 -- # return 0 00:04:19.614 08:54:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:19.614 08:54:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.614 08:54:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.614 08:54:56 -- setup/common.sh@18 -- # local node= 00:04:19.614 08:54:56 -- setup/common.sh@19 -- # local var val 00:04:19.614 08:54:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.614 08:54:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.614 08:54:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.614 08:54:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.614 08:54:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.614 08:54:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8053516 kB' 'MemAvailable: 9448780 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460324 kB' 'Inactive: 1269620 kB' 'Active(anon): 128800 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119892 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155728 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93596 kB' 'KernelStack: 6432 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.614 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.614 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 
-- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.615 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.615 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 
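For orientation while reading the repeated read/continue entries around this point: the traced get_meminfo in setup/common.sh buffers the memory counters (mapfile), strips any "Node N " prefixes, then walks the keys one by one, skipping every key that does not match the requested one (each skip appears as a "continue" above) and echoing the value of the key that does match. The following is a minimal sketch reconstructed from this xtrace for illustration only; the function name get_meminfo_sketch is made up here, and the real helper in the setup/common.sh referenced by the trace may differ in detail.

    shopt -s extglob   # needed for the +([0-9]) pattern when stripping "Node N " prefixes

    # Illustrative re-creation of the scan seen in the trace: buffer the meminfo
    # file, drop any "Node <n> " prefixes, then read it back key by key and print
    # the value of the requested key.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # With a node argument the per-node counters are used instead of /proc/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0   # matching key: print value and stop
            continue                                          # every other key is skipped
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as, e.g., get_meminfo_sketch HugePages_Surp or get_meminfo_sketch HugePages_Surp 0 for node 0, mirroring the get_meminfo calls interleaved in this log.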
00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.616 08:54:56 -- setup/common.sh@33 -- # echo 0 00:04:19.616 08:54:56 -- setup/common.sh@33 -- # return 0 00:04:19.616 08:54:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:19.616 08:54:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.616 08:54:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.616 08:54:56 -- setup/common.sh@18 -- # local node= 00:04:19.616 08:54:56 -- setup/common.sh@19 -- # local var val 00:04:19.616 08:54:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.616 08:54:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.616 08:54:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.616 08:54:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.616 08:54:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.616 08:54:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8053516 kB' 'MemAvailable: 9448780 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460388 kB' 'Inactive: 1269620 kB' 'Active(anon): 128864 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119984 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155728 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93596 kB' 'KernelStack: 6448 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 
-- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.616 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.616 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.617 08:54:56 -- setup/common.sh@33 -- # echo 0 00:04:19.617 08:54:56 -- setup/common.sh@33 -- # return 0 00:04:19.617 nr_hugepages=1024 00:04:19.617 resv_hugepages=0 00:04:19.617 surplus_hugepages=0 00:04:19.617 anon_hugepages=0 00:04:19.617 08:54:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:19.617 08:54:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.617 08:54:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.617 08:54:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.617 08:54:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.617 08:54:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.617 08:54:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.617 08:54:56 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:04:19.617 08:54:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.617 08:54:56 -- setup/common.sh@18 -- # local node= 00:04:19.617 08:54:56 -- setup/common.sh@19 -- # local var val 00:04:19.617 08:54:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.617 08:54:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.617 08:54:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.617 08:54:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.617 08:54:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.617 08:54:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8053516 kB' 'MemAvailable: 9448780 kB' 'Buffers: 3200 kB' 'Cached: 1608428 kB' 'SwapCached: 0 kB' 'Active: 460328 kB' 'Inactive: 1269620 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119892 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155728 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93596 kB' 'KernelStack: 6432 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.617 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.617 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.618 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.618 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.619 08:54:56 -- setup/common.sh@33 -- # echo 1024 00:04:19.619 08:54:56 -- setup/common.sh@33 -- # return 0 00:04:19.619 08:54:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.619 08:54:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.619 08:54:56 -- setup/hugepages.sh@27 -- # local node 00:04:19.619 08:54:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.619 08:54:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.619 08:54:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:19.619 08:54:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.619 08:54:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.619 08:54:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.619 08:54:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.619 08:54:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.619 08:54:56 -- setup/common.sh@18 -- # local node=0 00:04:19.619 08:54:56 -- setup/common.sh@19 -- # local var val 00:04:19.619 08:54:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.619 08:54:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.619 08:54:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.619 08:54:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.619 08:54:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.619 08:54:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8053516 kB' 'MemUsed: 4185600 kB' 'SwapCached: 0 kB' 'Active: 460316 kB' 'Inactive: 1269620 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269620 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1611628 kB' 'Mapped: 53352 kB' 'AnonPages: 119884 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62132 kB' 'Slab: 155724 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- 
setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.619 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.619 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.878 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.878 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # continue 00:04:19.879 08:54:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.879 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.879 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.879 08:54:56 -- setup/common.sh@33 -- # echo 0 00:04:19.879 08:54:56 -- setup/common.sh@33 -- # return 0 00:04:19.879 node0=1024 expecting 1024 00:04:19.879 08:54:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.879 08:54:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.879 08:54:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.879 08:54:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.879 08:54:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.879 08:54:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.879 08:54:56 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:19.879 08:54:56 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:19.879 08:54:56 -- setup/hugepages.sh@202 -- # setup output 00:04:19.879 08:54:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.879 08:54:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:20.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.140 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:20.140 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:20.140 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:20.140 08:54:56 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:20.140 08:54:56 -- setup/hugepages.sh@89 -- # local node 00:04:20.140 08:54:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.140 08:54:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.140 08:54:56 -- setup/hugepages.sh@92 -- # local surp 00:04:20.140 08:54:56 -- setup/hugepages.sh@93 -- # local resv 00:04:20.140 08:54:56 -- setup/hugepages.sh@94 -- # local anon 00:04:20.140 08:54:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.140 08:54:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.140 08:54:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.141 08:54:56 -- setup/common.sh@18 -- # local node= 00:04:20.141 08:54:56 -- setup/common.sh@19 -- # local var val 00:04:20.141 08:54:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.141 08:54:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.141 08:54:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.141 08:54:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.141 08:54:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.141 08:54:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8055904 kB' 'MemAvailable: 9451172 kB' 'Buffers: 3200 kB' 'Cached: 1608432 kB' 'SwapCached: 0 kB' 'Active: 461032 kB' 'Inactive: 1269624 kB' 'Active(anon): 129508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120620 kB' 'Mapped: 53468 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 
155760 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93628 kB' 'KernelStack: 6552 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.141 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.141 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.142 08:54:56 -- setup/common.sh@33 -- # echo 0 00:04:20.142 08:54:56 -- setup/common.sh@33 -- # return 0 00:04:20.142 08:54:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:20.142 08:54:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.142 08:54:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.142 08:54:56 -- setup/common.sh@18 -- # local node= 00:04:20.142 08:54:56 -- setup/common.sh@19 -- # local var val 00:04:20.142 08:54:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.142 08:54:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.142 08:54:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.142 08:54:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.142 08:54:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.142 08:54:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8055656 kB' 'MemAvailable: 9450924 kB' 'Buffers: 3200 kB' 'Cached: 1608432 kB' 'SwapCached: 0 kB' 'Active: 460564 kB' 'Inactive: 1269624 kB' 'Active(anon): 129040 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120148 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155748 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93616 kB' 'KernelStack: 6464 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 
'DirectMap1G: 9437184 kB' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.142 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.142 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 
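The long runs of [[ Key == \H\u\g\e... ]] / continue above are setup/common.sh's get_meminfo walking the mapfile'd /proc/meminfo contents one field at a time with IFS=': ' read, skipping every key until it reaches the requested one (HugePages_Surp in this pass) and echoing its value. A minimal standalone sketch of that lookup pattern follows; the function name get_meminfo_value and the sed-based "Node N " prefix stripping are illustrative simplifications, not the script's exact internals.

#!/usr/bin/env bash
# Look up a single field from /proc/meminfo, or from a per-NUMA-node meminfo
# file when a node number is given (the trace switches to the node file later
# with "local node=0").
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; drop it so both files
    # share the "Key: value [kB]" shape, then skip lines until the key matches.
    sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        break
    done
}

get_meminfo_value HugePages_Surp       # 0 on this runner, per the trace above
get_meminfo_value HugePages_Total 0    # node 0, if the per-node file exists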
00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 
08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.143 08:54:56 -- setup/common.sh@33 -- # echo 0 00:04:20.143 08:54:56 -- setup/common.sh@33 -- # return 0 00:04:20.143 08:54:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:20.143 08:54:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.143 08:54:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.143 08:54:56 -- setup/common.sh@18 -- # local node= 00:04:20.143 08:54:56 -- setup/common.sh@19 -- # local var val 00:04:20.143 08:54:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.143 08:54:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.143 08:54:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.143 08:54:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.143 08:54:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.143 08:54:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8055656 kB' 'MemAvailable: 9450924 kB' 'Buffers: 3200 kB' 'Cached: 1608432 kB' 'SwapCached: 0 kB' 'Active: 460416 kB' 'Inactive: 1269624 kB' 'Active(anon): 128892 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120020 kB' 'Mapped: 53352 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6448 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.143 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.143 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 
-- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 
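This HugePages_Rsvd walk, together with the anon (AnonHugePages) and surp (HugePages_Surp) values already returned as 0, feeds the accounting identity that verify_nr_hugepages checks a little further down in the trace: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages. With this run's values that is 1024 == 1024 + 0 + 0. A hedged sketch of that arithmetic, with the values hard-coded here instead of fetched through get_meminfo and with illustrative variable names:

#!/usr/bin/env bash
# Accounting identity visible in the hugepages.sh trace:
#   HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd
nr_hugepages=1024   # requested count; the trace echoes nr_hugepages=1024 just after this lookup
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=1024          # HugePages_Total

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
    exit 1
fi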
00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.144 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.144 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.145 08:54:57 -- setup/common.sh@33 -- # echo 0 00:04:20.145 08:54:57 -- setup/common.sh@33 -- # return 0 00:04:20.145 nr_hugepages=1024 00:04:20.145 08:54:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:20.145 08:54:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:20.145 resv_hugepages=0 00:04:20.145 08:54:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.145 surplus_hugepages=0 00:04:20.145 anon_hugepages=0 00:04:20.145 08:54:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.145 08:54:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.145 08:54:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.145 08:54:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:20.145 08:54:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.145 08:54:57 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:20.145 08:54:57 -- setup/common.sh@18 -- # local node= 00:04:20.145 08:54:57 -- setup/common.sh@19 -- # local var val 00:04:20.145 08:54:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.145 08:54:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.145 08:54:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.145 08:54:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.145 08:54:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.145 08:54:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8055916 kB' 'MemAvailable: 9451184 kB' 'Buffers: 3200 kB' 'Cached: 1608432 kB' 'SwapCached: 0 kB' 'Active: 458776 kB' 'Inactive: 1269624 kB' 'Active(anon): 127252 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118344 kB' 'Mapped: 52832 kB' 'Shmem: 10484 kB' 'KReclaimable: 62132 kB' 'Slab: 155740 kB' 'SReclaimable: 62132 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6432 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 310516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- 
setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 
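Once the HugePages_Total lookup in progress here returns 1024, the script repeats the same kind of lookup per NUMA node, switching mem_f to /sys/devices/system/node/node0/meminfo and comparing each node's share against the expected count (the earlier "node0=1024 expecting 1024" output is that per-node result). A rough sketch of such a per-node tally, assuming the single-node layout of this VM and using illustrative variable names rather than the script's nodes_test/nodes_sys arrays:

#!/usr/bin/env bash
# Sum HugePages_Total across every NUMA node and compare it with the
# requested count, mirroring the per-node walk in the trace.
expected=1024
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -e $node_dir/meminfo ]] || continue
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
    pages=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${pages} expecting ${expected}"
    (( total += pages ))
done
if (( total != expected )); then
    echo "per-node hugepage sum $total does not match $expected" >&2
    exit 1
fi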
00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.145 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.145 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 
08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.146 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.146 08:54:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.146 08:54:57 -- setup/common.sh@33 -- # echo 1024 00:04:20.146 08:54:57 -- setup/common.sh@33 -- # return 0 00:04:20.146 08:54:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.146 08:54:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.146 08:54:57 -- setup/hugepages.sh@27 -- # local node 00:04:20.146 08:54:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.146 08:54:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:20.146 08:54:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:20.146 08:54:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.146 08:54:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.146 08:54:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.146 08:54:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.146 08:54:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.146 08:54:57 -- setup/common.sh@18 -- # local node=0 00:04:20.146 08:54:57 -- setup/common.sh@19 -- # local var val 00:04:20.146 08:54:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.146 08:54:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.146 08:54:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.146 08:54:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.146 08:54:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.146 08:54:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8056172 kB' 'MemUsed: 4182944 kB' 'SwapCached: 0 kB' 'Active: 457544 kB' 'Inactive: 1269624 kB' 'Active(anon): 126020 kB' 'Inactive(anon): 0 kB' 'Active(file): 331524 kB' 'Inactive(file): 1269624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1611632 kB' 'Mapped: 52504 kB' 
'AnonPages: 117140 kB' 'Shmem: 10484 kB' 'KernelStack: 6320 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62080 kB' 'Slab: 155600 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 
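[Editor's note] What the trace above boils down to: get_meminfo() in setup/common.sh reads /proc/meminfo, or the per-node copy under /sys/devices/system/node/node<N>/meminfo when a node index is given, strips the "Node <N> " prefix, then skips field after field (the long run of "continue" lines) until it reaches the requested key and echoes its value. A condensed, self-contained sketch of that logic follows; the function body is a rewrite of what the xtrace shows, not a copy of the script itself.

#!/usr/bin/env bash
shopt -s extglob
# Scan a meminfo file field by field and print the value of one key,
# mirroring the loop traced above.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix
    local IFS=': ' var val _ line
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the repeated "continue" seen in the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total       # whole-system value, e.g. 1024
get_meminfo HugePages_Surp 0      # node 0 value, as queried above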
00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- 
setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.406 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.406 08:54:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.407 08:54:57 -- setup/common.sh@32 -- # continue 00:04:20.407 08:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.407 08:54:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.407 08:54:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.407 08:54:57 -- setup/common.sh@33 -- # echo 0 00:04:20.407 08:54:57 -- setup/common.sh@33 -- # return 0 00:04:20.407 node0=1024 expecting 1024 00:04:20.407 08:54:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.407 08:54:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.407 08:54:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.407 08:54:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.407 08:54:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:20.407 08:54:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:20.407 00:04:20.407 real 0m1.140s 00:04:20.407 user 0m0.571s 00:04:20.407 sys 0m0.563s 00:04:20.407 08:54:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.407 ************************************ 00:04:20.407 END TEST no_shrink_alloc 00:04:20.407 ************************************ 00:04:20.407 08:54:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.407 08:54:57 -- setup/hugepages.sh@217 -- # clear_hp 00:04:20.407 08:54:57 -- setup/hugepages.sh@37 -- # local node hp 00:04:20.407 08:54:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:20.407 08:54:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.407 08:54:57 -- setup/hugepages.sh@41 -- # echo 0 00:04:20.407 08:54:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.407 08:54:57 -- setup/hugepages.sh@41 -- # echo 0 00:04:20.407 08:54:57 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:20.407 08:54:57 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:20.407 00:04:20.407 real 0m4.797s 00:04:20.407 user 0m2.360s 00:04:20.407 sys 0m2.451s 00:04:20.407 08:54:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.407 ************************************ 00:04:20.407 END TEST hugepages 00:04:20.407 ************************************ 00:04:20.407 08:54:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.407 08:54:57 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:20.407 08:54:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.407 08:54:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.407 08:54:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.407 ************************************ 00:04:20.407 START TEST driver 00:04:20.407 ************************************ 00:04:20.407 08:54:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:20.407 * Looking for test storage... 
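[Editor's note] The hugepages run above finishes by checking that node 0's HugePages_Total (1024) matches nr_hugepages plus surplus and reserved pages, then clear_hp walks every hugepage pool and resets it to zero through sysfs before exporting CLEAR_HUGE=yes. Roughly, assuming the standard kernel sysfs layout and root privileges; iterating the node directories directly is a simplification of the script's "${!nodes_sys[@]}" bookkeeping:

#!/usr/bin/env bash
# Approximate cleanup step: zero all hugepage pools on every NUMA node,
# as the clear_hp loop in setup/hugepages.sh does via the sysfs knobs.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -e $hp/nr_hugepages ]] || continue
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}
clear_hp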
00:04:20.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.407 08:54:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:20.407 08:54:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:20.407 08:54:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:20.666 08:54:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:20.666 08:54:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:20.666 08:54:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:20.666 08:54:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:20.666 08:54:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:20.666 08:54:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:20.666 08:54:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.666 08:54:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:20.666 08:54:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:20.666 08:54:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:20.666 08:54:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:20.666 08:54:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:20.666 08:54:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:20.666 08:54:57 -- scripts/common.sh@344 -- # : 1 00:04:20.666 08:54:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:20.666 08:54:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.666 08:54:57 -- scripts/common.sh@364 -- # decimal 1 00:04:20.666 08:54:57 -- scripts/common.sh@352 -- # local d=1 00:04:20.666 08:54:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.666 08:54:57 -- scripts/common.sh@354 -- # echo 1 00:04:20.666 08:54:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:20.666 08:54:57 -- scripts/common.sh@365 -- # decimal 2 00:04:20.666 08:54:57 -- scripts/common.sh@352 -- # local d=2 00:04:20.666 08:54:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.666 08:54:57 -- scripts/common.sh@354 -- # echo 2 00:04:20.666 08:54:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:20.666 08:54:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:20.666 08:54:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:20.666 08:54:57 -- scripts/common.sh@367 -- # return 0 00:04:20.666 08:54:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.666 08:54:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.666 --rc genhtml_branch_coverage=1 00:04:20.666 --rc genhtml_function_coverage=1 00:04:20.666 --rc genhtml_legend=1 00:04:20.666 --rc geninfo_all_blocks=1 00:04:20.666 --rc geninfo_unexecuted_blocks=1 00:04:20.666 00:04:20.666 ' 00:04:20.666 08:54:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.666 --rc genhtml_branch_coverage=1 00:04:20.666 --rc genhtml_function_coverage=1 00:04:20.666 --rc genhtml_legend=1 00:04:20.666 --rc geninfo_all_blocks=1 00:04:20.666 --rc geninfo_unexecuted_blocks=1 00:04:20.666 00:04:20.666 ' 00:04:20.666 08:54:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.666 --rc genhtml_branch_coverage=1 00:04:20.666 --rc genhtml_function_coverage=1 00:04:20.666 --rc genhtml_legend=1 00:04:20.666 --rc geninfo_all_blocks=1 00:04:20.666 --rc geninfo_unexecuted_blocks=1 00:04:20.666 00:04:20.666 ' 00:04:20.666 08:54:57 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.666 --rc genhtml_branch_coverage=1 00:04:20.666 --rc genhtml_function_coverage=1 00:04:20.666 --rc genhtml_legend=1 00:04:20.666 --rc geninfo_all_blocks=1 00:04:20.666 --rc geninfo_unexecuted_blocks=1 00:04:20.666 00:04:20.666 ' 00:04:20.666 08:54:57 -- setup/driver.sh@68 -- # setup reset 00:04:20.666 08:54:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.666 08:54:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.234 08:54:57 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:21.234 08:54:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.234 08:54:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.234 08:54:57 -- common/autotest_common.sh@10 -- # set +x 00:04:21.234 ************************************ 00:04:21.234 START TEST guess_driver 00:04:21.234 ************************************ 00:04:21.234 08:54:57 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:21.234 08:54:57 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:21.234 08:54:57 -- setup/driver.sh@47 -- # local fail=0 00:04:21.234 08:54:57 -- setup/driver.sh@49 -- # pick_driver 00:04:21.234 08:54:57 -- setup/driver.sh@36 -- # vfio 00:04:21.234 08:54:57 -- setup/driver.sh@21 -- # local iommu_grups 00:04:21.234 08:54:57 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:21.234 08:54:57 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:21.234 08:54:57 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:21.234 08:54:57 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:21.234 08:54:57 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:21.234 08:54:57 -- setup/driver.sh@32 -- # return 1 00:04:21.234 08:54:57 -- setup/driver.sh@38 -- # uio 00:04:21.234 08:54:57 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:21.234 08:54:57 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:21.234 08:54:57 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:21.234 08:54:57 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:21.234 08:54:57 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:21.234 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:21.234 08:54:57 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:21.234 Looking for driver=uio_pci_generic 00:04:21.234 08:54:57 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:21.234 08:54:57 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:21.234 08:54:57 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:21.234 08:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.234 08:54:57 -- setup/driver.sh@45 -- # setup output config 00:04:21.234 08:54:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.234 08:54:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.801 08:54:58 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:21.801 08:54:58 -- setup/driver.sh@58 -- # continue 00:04:21.801 08:54:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.801 08:54:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.801 08:54:58 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:21.801 08:54:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.060 08:54:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.060 08:54:58 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:22.060 08:54:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.060 08:54:58 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:22.060 08:54:58 -- setup/driver.sh@65 -- # setup reset 00:04:22.060 08:54:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.060 08:54:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.626 00:04:22.626 real 0m1.396s 00:04:22.626 user 0m0.535s 00:04:22.626 sys 0m0.834s 00:04:22.626 08:54:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.626 ************************************ 00:04:22.626 END TEST guess_driver 00:04:22.626 ************************************ 00:04:22.626 08:54:59 -- common/autotest_common.sh@10 -- # set +x 00:04:22.626 ************************************ 00:04:22.626 END TEST driver 00:04:22.626 ************************************ 00:04:22.626 00:04:22.626 real 0m2.166s 00:04:22.626 user 0m0.829s 00:04:22.626 sys 0m1.372s 00:04:22.626 08:54:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.626 08:54:59 -- common/autotest_common.sh@10 -- # set +x 00:04:22.626 08:54:59 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.626 08:54:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.626 08:54:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.626 08:54:59 -- common/autotest_common.sh@10 -- # set +x 00:04:22.626 ************************************ 00:04:22.626 START TEST devices 00:04:22.626 ************************************ 00:04:22.626 08:54:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.626 * Looking for test storage... 00:04:22.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.626 08:54:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:22.626 08:54:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:22.626 08:54:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:22.885 08:54:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:22.885 08:54:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:22.885 08:54:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:22.885 08:54:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:22.885 08:54:59 -- scripts/common.sh@335 -- # IFS=.-: 00:04:22.885 08:54:59 -- scripts/common.sh@335 -- # read -ra ver1 00:04:22.885 08:54:59 -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.885 08:54:59 -- scripts/common.sh@336 -- # read -ra ver2 00:04:22.885 08:54:59 -- scripts/common.sh@337 -- # local 'op=<' 00:04:22.885 08:54:59 -- scripts/common.sh@339 -- # ver1_l=2 00:04:22.885 08:54:59 -- scripts/common.sh@340 -- # ver2_l=1 00:04:22.885 08:54:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:22.885 08:54:59 -- scripts/common.sh@343 -- # case "$op" in 00:04:22.885 08:54:59 -- scripts/common.sh@344 -- # : 1 00:04:22.885 08:54:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:22.885 08:54:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.885 08:54:59 -- scripts/common.sh@364 -- # decimal 1 00:04:22.885 08:54:59 -- scripts/common.sh@352 -- # local d=1 00:04:22.885 08:54:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.885 08:54:59 -- scripts/common.sh@354 -- # echo 1 00:04:22.885 08:54:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:22.885 08:54:59 -- scripts/common.sh@365 -- # decimal 2 00:04:22.885 08:54:59 -- scripts/common.sh@352 -- # local d=2 00:04:22.885 08:54:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.885 08:54:59 -- scripts/common.sh@354 -- # echo 2 00:04:22.885 08:54:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:22.885 08:54:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:22.885 08:54:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:22.885 08:54:59 -- scripts/common.sh@367 -- # return 0 00:04:22.885 08:54:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.885 08:54:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:22.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.885 --rc genhtml_branch_coverage=1 00:04:22.885 --rc genhtml_function_coverage=1 00:04:22.885 --rc genhtml_legend=1 00:04:22.885 --rc geninfo_all_blocks=1 00:04:22.885 --rc geninfo_unexecuted_blocks=1 00:04:22.885 00:04:22.885 ' 00:04:22.885 08:54:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:22.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.885 --rc genhtml_branch_coverage=1 00:04:22.885 --rc genhtml_function_coverage=1 00:04:22.885 --rc genhtml_legend=1 00:04:22.885 --rc geninfo_all_blocks=1 00:04:22.885 --rc geninfo_unexecuted_blocks=1 00:04:22.885 00:04:22.885 ' 00:04:22.885 08:54:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:22.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.885 --rc genhtml_branch_coverage=1 00:04:22.885 --rc genhtml_function_coverage=1 00:04:22.885 --rc genhtml_legend=1 00:04:22.885 --rc geninfo_all_blocks=1 00:04:22.885 --rc geninfo_unexecuted_blocks=1 00:04:22.885 00:04:22.885 ' 00:04:22.885 08:54:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:22.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.885 --rc genhtml_branch_coverage=1 00:04:22.885 --rc genhtml_function_coverage=1 00:04:22.885 --rc genhtml_legend=1 00:04:22.885 --rc geninfo_all_blocks=1 00:04:22.885 --rc geninfo_unexecuted_blocks=1 00:04:22.885 00:04:22.885 ' 00:04:22.885 08:54:59 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:22.885 08:54:59 -- setup/devices.sh@192 -- # setup reset 00:04:22.885 08:54:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.885 08:54:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.453 08:55:00 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:23.453 08:55:00 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:23.453 08:55:00 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:23.453 08:55:00 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:23.453 08:55:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.453 08:55:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:23.453 08:55:00 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:23.453 08:55:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.453 08:55:00 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:23.453 08:55:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.453 08:55:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:23.453 08:55:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:23.453 08:55:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.453 08:55:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:23.453 08:55:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.453 08:55:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:23.453 08:55:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:23.453 08:55:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:23.453 08:55:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:23.453 08:55:00 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.453 08:55:00 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:23.453 08:55:00 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:23.453 08:55:00 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:23.453 08:55:00 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:23.453 08:55:00 -- setup/devices.sh@196 -- # blocks=() 00:04:23.453 08:55:00 -- setup/devices.sh@196 -- # declare -a blocks 00:04:23.453 08:55:00 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:23.453 08:55:00 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:23.453 08:55:00 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:23.453 08:55:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.453 08:55:00 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:23.453 08:55:00 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.453 08:55:00 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:23.453 08:55:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:23.453 08:55:00 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:23.453 08:55:00 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:23.453 08:55:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.453 No valid GPT data, bailing 00:04:23.453 08:55:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.713 08:55:00 -- scripts/common.sh@393 -- # pt= 00:04:23.713 08:55:00 -- scripts/common.sh@394 -- # return 1 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.713 08:55:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.713 08:55:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.713 08:55:00 -- setup/common.sh@80 -- # echo 5368709120 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:23.713 08:55:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.713 08:55:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:23.713 08:55:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.713 08:55:00 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:23.713 08:55:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.713 08:55:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:23.713 08:55:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
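[Editor's note] Before claiming any disks, the devices suite filters out zoned namespaces: for each /sys/block/nvme* entry the traced is_block_zoned helper reads queue/zoned and treats anything other than "none" as zoned (none of the four test disks are, so every "[[ none != none ]]" check above is false). A minimal equivalent; what gets stored per device in the array is illustrative:

#!/usr/bin/env bash
# Collect zoned nvme block devices the way the traced get_zoned_devs loop
# does: a device counts as zoned when queue/zoned reports anything but "none".
declare -A zoned_devs=()
for sysdir in /sys/block/nvme*; do
    [[ -e $sysdir/queue/zoned ]] || continue
    dev=${sysdir##*/}
    if [[ $(<"$sysdir/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1            # placeholder value; the script keys by device
    fi
done
echo "zoned: ${!zoned_devs[*]}"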
00:04:23.713 08:55:00 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:23.713 08:55:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:23.713 No valid GPT data, bailing 00:04:23.713 08:55:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.713 08:55:00 -- scripts/common.sh@393 -- # pt= 00:04:23.713 08:55:00 -- scripts/common.sh@394 -- # return 1 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:23.713 08:55:00 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:23.713 08:55:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:23.713 08:55:00 -- setup/common.sh@80 -- # echo 4294967296 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.713 08:55:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.713 08:55:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:23.713 08:55:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.713 08:55:00 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:23.713 08:55:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.713 08:55:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:23.713 08:55:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:23.713 08:55:00 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:23.713 08:55:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:23.713 No valid GPT data, bailing 00:04:23.713 08:55:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:23.713 08:55:00 -- scripts/common.sh@393 -- # pt= 00:04:23.713 08:55:00 -- scripts/common.sh@394 -- # return 1 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:23.713 08:55:00 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:23.713 08:55:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:23.713 08:55:00 -- setup/common.sh@80 -- # echo 4294967296 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.713 08:55:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.713 08:55:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:23.713 08:55:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.713 08:55:00 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:23.713 08:55:00 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.713 08:55:00 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:23.713 08:55:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:23.713 08:55:00 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:23.713 08:55:00 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:23.713 No valid GPT data, bailing 00:04:23.713 08:55:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:23.713 08:55:00 -- scripts/common.sh@393 -- # pt= 00:04:23.713 08:55:00 -- scripts/common.sh@394 -- # return 1 00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:23.713 08:55:00 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:23.713 08:55:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:23.713 08:55:00 -- setup/common.sh@80 -- # echo 4294967296 
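[Editor's note] Each candidate disk is then checked for being free and large enough before the tests touch it. In the trace, spdk-gpt.py bails with "No valid GPT data", blkid finds no PTTYPE, block_in_use returns 1 (not in use), and sec_size_to_bytes reports 5 GiB for nvme0n1 and 4 GiB for the others, all above the 3 GiB minimum. The sketch below uses plain blkid and a /sys/block/<dev>/size lookup as stand-ins for the SPDK helpers; those substitutions and the function names are assumptions:

#!/usr/bin/env bash
# A disk is usable when it carries no partition table and is at least
# min_disk_size bytes (3221225472 in the trace).
min_disk_size=$((3 * 1024 * 1024 * 1024))

disk_is_free() {
    local pt
    pt=$(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null)
    [[ -z $pt ]]                       # any PTTYPE means the disk is already in use
}

disk_is_big_enough() {
    local sectors
    sectors=$(< "/sys/block/$1/size")  # size in 512-byte sectors
    (( sectors * 512 >= min_disk_size ))
}

for dev in nvme0n1 nvme1n1 nvme1n2 nvme1n3; do
    disk_is_free "$dev" && disk_is_big_enough "$dev" && echo "usable: $dev"
done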
00:04:23.713 08:55:00 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.713 08:55:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.713 08:55:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:23.713 08:55:00 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:23.713 08:55:00 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:23.713 08:55:00 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:23.713 08:55:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.713 08:55:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.713 08:55:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.713 ************************************ 00:04:23.713 START TEST nvme_mount 00:04:23.713 ************************************ 00:04:23.713 08:55:00 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:23.713 08:55:00 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:23.713 08:55:00 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:23.713 08:55:00 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.713 08:55:00 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.713 08:55:00 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:23.713 08:55:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.713 08:55:00 -- setup/common.sh@40 -- # local part_no=1 00:04:23.713 08:55:00 -- setup/common.sh@41 -- # local size=1073741824 00:04:23.713 08:55:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.713 08:55:00 -- setup/common.sh@44 -- # parts=() 00:04:23.713 08:55:00 -- setup/common.sh@44 -- # local parts 00:04:23.713 08:55:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.713 08:55:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.713 08:55:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.713 08:55:00 -- setup/common.sh@46 -- # (( part++ )) 00:04:23.713 08:55:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.713 08:55:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:23.713 08:55:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.713 08:55:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:25.090 Creating new GPT entries in memory. 00:04:25.090 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.090 other utilities. 00:04:25.090 08:55:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.090 08:55:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.090 08:55:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.090 08:55:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.090 08:55:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:26.027 Creating new GPT entries in memory. 00:04:26.027 The operation has completed successfully. 
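[Editor's note] nvme_mount starts by carving one partition out of the test disk: the requested byte size (1073741824) is divided by 4096 and used directly as the sector count, the old GPT is zapped, and sgdisk creates partition 1 from sector 2048 to 264191 under flock, exactly as the commands above show. The sync_dev_uevents.sh wrapper that waits for the partition uevent is SPDK-specific; udevadm settle stands in for it in this sketch:

#!/usr/bin/env bash
# Partitioning step from the trace: wipe the GPT, then create partition 1
# starting at sector 2048 with the computed length.
disk=/dev/nvme0n1
size=1073741824                 # bytes requested per partition
(( size /= 4096 ))              # 262144, used as the sector count by the script
part_start=2048
part_end=$(( part_start + size - 1 ))   # 264191, matching --new=1:2048:264191

sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
udevadm settle                  # wait for /dev/nvme0n1p1 to appear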
00:04:26.027 08:55:02 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.027 08:55:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.027 08:55:02 -- setup/common.sh@62 -- # wait 52105 00:04:26.027 08:55:02 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.027 08:55:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:26.027 08:55:02 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.027 08:55:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:26.027 08:55:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:26.027 08:55:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.027 08:55:02 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.027 08:55:02 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:26.027 08:55:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:26.027 08:55:02 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.027 08:55:02 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.027 08:55:02 -- setup/devices.sh@53 -- # local found=0 00:04:26.027 08:55:02 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.027 08:55:02 -- setup/devices.sh@56 -- # : 00:04:26.027 08:55:02 -- setup/devices.sh@59 -- # local pci status 00:04:26.027 08:55:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.027 08:55:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:26.027 08:55:02 -- setup/devices.sh@47 -- # setup output config 00:04:26.027 08:55:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.027 08:55:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.027 08:55:02 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.027 08:55:02 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:26.027 08:55:02 -- setup/devices.sh@63 -- # found=1 00:04:26.027 08:55:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.027 08:55:02 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.027 08:55:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.596 08:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.596 08:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.596 08:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.596 08:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.596 08:55:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.596 08:55:03 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:26.596 08:55:03 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.596 08:55:03 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.596 08:55:03 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.596 08:55:03 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:26.596 08:55:03 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.596 08:55:03 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.596 08:55:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.596 08:55:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.596 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.596 08:55:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.596 08:55:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.855 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.855 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.855 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.855 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.855 08:55:03 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:26.855 08:55:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:26.855 08:55:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.855 08:55:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:26.855 08:55:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:26.855 08:55:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.855 08:55:03 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.855 08:55:03 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:26.855 08:55:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:26.855 08:55:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.855 08:55:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.855 08:55:03 -- setup/devices.sh@53 -- # local found=0 00:04:26.855 08:55:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.855 08:55:03 -- setup/devices.sh@56 -- # : 00:04:26.855 08:55:03 -- setup/devices.sh@59 -- # local pci status 00:04:26.855 08:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.855 08:55:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:26.855 08:55:03 -- setup/devices.sh@47 -- # setup output config 00:04:26.855 08:55:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.855 08:55:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.114 08:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.114 08:55:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:27.114 08:55:03 -- setup/devices.sh@63 -- # found=1 00:04:27.114 08:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.114 08:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.114 
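[Editor's note] With a partition (and later the whole disk) formatted and mounted, the test drops a test_nvme marker file, re-reads the setup.sh config output to confirm the device shows up as "mount@nvme0n1:nvme0n1p1" rather than as rebindable, then cleanup_nvme unmounts and wipes the signatures, which is the wipefs output above. The same cycle reduced to plain shell, with paths and the marker name taken from the trace and the harness plumbing omitted:

#!/usr/bin/env bash
# Mount / verify / clean-up cycle of the nvme_mount test.
dev=/dev/nvme0n1p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
marker=$mnt/test_nvme

mkdir -p "$mnt"
mkfs.ext4 -qF "$dev"
mount "$dev" "$mnt"
: > "$marker"                          # file the verify step looks for

# ... verification: setup.sh config must report mount@nvme0n1:nvme0n1p1 ...

[[ -e $marker ]] && rm "$marker"
mountpoint -q "$mnt" && umount "$mnt"          # cleanup_nvme
[[ -b $dev ]] && wipefs --all "$dev"           # "2 bytes were erased ... 53 ef"
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1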
08:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.373 08:55:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.373 08:55:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.373 08:55:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.373 08:55:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.640 08:55:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.640 08:55:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:27.640 08:55:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.640 08:55:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.640 08:55:04 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.640 08:55:04 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.640 08:55:04 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:27.640 08:55:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:27.640 08:55:04 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:27.640 08:55:04 -- setup/devices.sh@50 -- # local mount_point= 00:04:27.640 08:55:04 -- setup/devices.sh@51 -- # local test_file= 00:04:27.640 08:55:04 -- setup/devices.sh@53 -- # local found=0 00:04:27.640 08:55:04 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.640 08:55:04 -- setup/devices.sh@59 -- # local pci status 00:04:27.640 08:55:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.640 08:55:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:27.640 08:55:04 -- setup/devices.sh@47 -- # setup output config 00:04:27.640 08:55:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.640 08:55:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.900 08:55:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.900 08:55:04 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:27.900 08:55:04 -- setup/devices.sh@63 -- # found=1 00:04:27.900 08:55:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.900 08:55:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.900 08:55:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.160 08:55:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.160 08:55:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.160 08:55:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.160 08:55:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.160 08:55:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.160 08:55:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.160 08:55:05 -- setup/devices.sh@68 -- # return 0 00:04:28.160 08:55:05 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:28.160 08:55:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.160 08:55:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.160 08:55:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.160 08:55:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.160 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:28.160 00:04:28.160 real 0m4.401s 00:04:28.160 user 0m0.994s 00:04:28.160 sys 0m1.080s 00:04:28.160 08:55:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.160 08:55:05 -- common/autotest_common.sh@10 -- # set +x 00:04:28.160 ************************************ 00:04:28.160 END TEST nvme_mount 00:04:28.160 ************************************ 00:04:28.160 08:55:05 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:28.160 08:55:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.160 08:55:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.160 08:55:05 -- common/autotest_common.sh@10 -- # set +x 00:04:28.160 ************************************ 00:04:28.160 START TEST dm_mount 00:04:28.160 ************************************ 00:04:28.160 08:55:05 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:28.160 08:55:05 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:28.160 08:55:05 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:28.160 08:55:05 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:28.160 08:55:05 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:28.160 08:55:05 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:28.160 08:55:05 -- setup/common.sh@40 -- # local part_no=2 00:04:28.160 08:55:05 -- setup/common.sh@41 -- # local size=1073741824 00:04:28.160 08:55:05 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:28.160 08:55:05 -- setup/common.sh@44 -- # parts=() 00:04:28.160 08:55:05 -- setup/common.sh@44 -- # local parts 00:04:28.160 08:55:05 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:28.160 08:55:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.160 08:55:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.160 08:55:05 -- setup/common.sh@46 -- # (( part++ )) 00:04:28.160 08:55:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.160 08:55:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.160 08:55:05 -- setup/common.sh@46 -- # (( part++ )) 00:04:28.160 08:55:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.160 08:55:05 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:28.160 08:55:05 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:28.160 08:55:05 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:29.538 Creating new GPT entries in memory. 00:04:29.538 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:29.538 other utilities. 00:04:29.538 08:55:06 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:29.538 08:55:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.538 08:55:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.538 08:55:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.538 08:55:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:30.474 Creating new GPT entries in memory. 00:04:30.474 The operation has completed successfully. 00:04:30.474 08:55:07 -- setup/common.sh@57 -- # (( part++ )) 00:04:30.474 08:55:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.474 08:55:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:30.474 08:55:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.474 08:55:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:31.410 The operation has completed successfully. 00:04:31.410 08:55:08 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.410 08:55:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.410 08:55:08 -- setup/common.sh@62 -- # wait 52564 00:04:31.410 08:55:08 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.410 08:55:08 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.410 08:55:08 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.410 08:55:08 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.410 08:55:08 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.410 08:55:08 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.410 08:55:08 -- setup/devices.sh@161 -- # break 00:04:31.410 08:55:08 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.410 08:55:08 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.410 08:55:08 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:31.410 08:55:08 -- setup/devices.sh@166 -- # dm=dm-0 00:04:31.410 08:55:08 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:31.410 08:55:08 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:31.410 08:55:08 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.410 08:55:08 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:31.410 08:55:08 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.410 08:55:08 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.410 08:55:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.410 08:55:08 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.410 08:55:08 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.410 08:55:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:31.410 08:55:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:31.410 08:55:08 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.410 08:55:08 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.410 08:55:08 -- setup/devices.sh@53 -- # local found=0 00:04:31.410 08:55:08 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.410 08:55:08 -- setup/devices.sh@56 -- # : 00:04:31.410 08:55:08 -- setup/devices.sh@59 -- # local pci status 00:04:31.410 08:55:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:31.410 08:55:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.410 08:55:08 -- setup/devices.sh@47 -- # setup output config 00:04:31.410 08:55:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.410 08:55:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.669 08:55:08 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.669 08:55:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:31.670 08:55:08 -- setup/devices.sh@63 -- # found=1 00:04:31.670 08:55:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 08:55:08 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.670 08:55:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.929 08:55:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.929 08:55:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.929 08:55:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.929 08:55:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.188 08:55:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.188 08:55:08 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:32.188 08:55:08 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.188 08:55:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.188 08:55:08 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.188 08:55:08 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.188 08:55:08 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:32.188 08:55:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:32.188 08:55:08 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:32.188 08:55:08 -- setup/devices.sh@50 -- # local mount_point= 00:04:32.188 08:55:08 -- setup/devices.sh@51 -- # local test_file= 00:04:32.188 08:55:08 -- setup/devices.sh@53 -- # local found=0 00:04:32.188 08:55:08 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.188 08:55:08 -- setup/devices.sh@59 -- # local pci status 00:04:32.188 08:55:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.188 08:55:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:32.188 08:55:08 -- setup/devices.sh@47 -- # setup output config 00:04:32.188 08:55:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.188 08:55:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.188 08:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.188 08:55:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:32.188 08:55:09 -- setup/devices.sh@63 -- # found=1 00:04:32.188 08:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.188 08:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.188 08:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.758 08:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.758 08:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.759 08:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.759 08:55:09 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.759 08:55:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.759 08:55:09 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:32.759 08:55:09 -- setup/devices.sh@68 -- # return 0 00:04:32.759 08:55:09 -- setup/devices.sh@187 -- # cleanup_dm 00:04:32.759 08:55:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.759 08:55:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:32.759 08:55:09 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:32.759 08:55:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.759 08:55:09 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:32.759 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.759 08:55:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:32.759 08:55:09 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:32.759 00:04:32.759 real 0m4.490s 00:04:32.759 user 0m0.635s 00:04:32.759 sys 0m0.803s 00:04:32.759 08:55:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:32.759 08:55:09 -- common/autotest_common.sh@10 -- # set +x 00:04:32.759 ************************************ 00:04:32.759 END TEST dm_mount 00:04:32.759 ************************************ 00:04:32.759 08:55:09 -- setup/devices.sh@1 -- # cleanup 00:04:32.759 08:55:09 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:32.759 08:55:09 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.759 08:55:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.759 08:55:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:32.759 08:55:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.759 08:55:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.018 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:33.018 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:33.018 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.018 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:33.018 08:55:09 -- setup/devices.sh@12 -- # cleanup_dm 00:04:33.018 08:55:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.018 08:55:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.018 08:55:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.018 08:55:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.018 08:55:09 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.018 08:55:09 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:33.018 00:04:33.018 real 0m10.505s 00:04:33.018 user 0m2.381s 00:04:33.018 sys 0m2.452s 00:04:33.018 08:55:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.018 ************************************ 00:04:33.018 END TEST devices 00:04:33.018 ************************************ 00:04:33.018 08:55:09 -- common/autotest_common.sh@10 -- # set +x 00:04:33.018 00:04:33.018 real 0m22.133s 00:04:33.018 user 0m7.702s 00:04:33.018 sys 0m8.789s 00:04:33.018 08:55:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.018 ************************************ 00:04:33.018 END TEST setup.sh 00:04:33.018 08:55:09 -- common/autotest_common.sh@10 -- # set +x 00:04:33.018 ************************************ 00:04:33.278 08:55:09 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:33.278 Hugepages 00:04:33.278 node hugesize free / total 00:04:33.278 node0 1048576kB 0 / 0 00:04:33.278 node0 2048kB 2048 / 2048 00:04:33.278 00:04:33.278 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:33.278 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:33.537 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:33.537 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:33.537 08:55:10 -- spdk/autotest.sh@128 -- # uname -s 00:04:33.537 08:55:10 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:33.537 08:55:10 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:33.537 08:55:10 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.105 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.363 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.363 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.363 08:55:11 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:35.300 08:55:12 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:35.300 08:55:12 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:35.300 08:55:12 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:35.300 08:55:12 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:35.300 08:55:12 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:35.300 08:55:12 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:35.300 08:55:12 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.300 08:55:12 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:35.300 08:55:12 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:35.559 08:55:12 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:35.559 08:55:12 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:35.559 08:55:12 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.818 Waiting for block devices as requested 00:04:35.818 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:35.818 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:36.078 08:55:12 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:36.078 08:55:12 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:36.078 08:55:12 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:36.078 08:55:12 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:36.078 08:55:12 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:36.078 08:55:12 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:36.078 08:55:12 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:36.078 08:55:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:36.078 08:55:12 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:36.078 08:55:12 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:36.078 08:55:12 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:36.078 08:55:12 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:36.078 08:55:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:36.078 08:55:12 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:36.078 08:55:12 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:36.078 08:55:12 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:36.078 08:55:12 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:36.078 08:55:12 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:36.078 08:55:12 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:36.078 08:55:12 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:36.078 08:55:12 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:36.078 08:55:12 -- common/autotest_common.sh@1552 -- # continue 00:04:36.078 08:55:12 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:36.078 08:55:12 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:36.078 08:55:12 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:36.078 08:55:12 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:36.078 08:55:12 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:36.078 08:55:12 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:36.078 08:55:12 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:36.078 08:55:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:36.078 08:55:12 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:36.078 08:55:12 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:36.078 08:55:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:36.078 08:55:12 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:36.078 08:55:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:36.078 08:55:12 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:36.078 08:55:12 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:36.078 08:55:12 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:36.078 08:55:12 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:36.078 08:55:12 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:36.078 08:55:12 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:36.078 08:55:12 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:36.078 08:55:12 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:36.078 08:55:12 -- common/autotest_common.sh@1552 -- # continue 00:04:36.078 08:55:12 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:36.078 08:55:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.078 08:55:12 -- common/autotest_common.sh@10 -- # set +x 00:04:36.078 08:55:12 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:36.078 08:55:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.078 08:55:12 -- common/autotest_common.sh@10 -- # set +x 00:04:36.078 08:55:12 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.905 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:36.905 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:36.905 08:55:13 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:36.905 08:55:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.905 08:55:13 -- common/autotest_common.sh@10 -- # set +x 00:04:36.905 08:55:13 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:36.905 08:55:13 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:36.905 08:55:13 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:36.905 08:55:13 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:36.905 08:55:13 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:36.905 08:55:13 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:36.905 08:55:13 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:36.905 08:55:13 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:36.905 08:55:13 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.905 08:55:13 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:36.905 08:55:13 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:37.164 08:55:13 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:37.164 08:55:13 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:37.164 08:55:13 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:37.164 08:55:13 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:37.164 08:55:13 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:37.164 08:55:13 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:37.164 08:55:13 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:37.164 08:55:13 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:37.164 08:55:13 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:37.164 08:55:13 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:37.164 08:55:13 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:37.164 08:55:13 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:37.164 08:55:13 -- common/autotest_common.sh@1588 -- # return 0 00:04:37.164 08:55:13 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:37.164 08:55:13 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:37.164 08:55:13 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:37.164 08:55:13 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:37.164 08:55:13 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:37.164 08:55:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.164 08:55:13 -- common/autotest_common.sh@10 -- # set +x 00:04:37.164 08:55:13 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:37.164 08:55:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.164 08:55:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.164 08:55:13 -- common/autotest_common.sh@10 -- # set +x 00:04:37.164 ************************************ 00:04:37.164 START TEST env 00:04:37.164 ************************************ 00:04:37.164 08:55:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:37.164 * Looking for test storage... 
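The opal_revert_cleanup step above builds its controller list from gen_nvme.sh and filters on the PCI device ID read from sysfs; here both emulated controllers report 0x0010, so nothing is reverted. A minimal standalone check of the same data, assuming the repo path and the two QEMU-emulated controllers used in this run, would be:

    # enumerate NVMe controller BDFs the way get_nvme_bdfs does
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'

    # read each controller's PCI device ID; opal_revert_cleanup only acts on 0x0a54 devices
    for bdf in 0000:00:06.0 0000:00:07.0; do
        printf '%s %s\n' "$bdf" "$(cat /sys/bus/pci/devices/$bdf/device)"
    done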
00:04:37.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:37.164 08:55:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:37.164 08:55:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:37.164 08:55:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:37.164 08:55:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:37.164 08:55:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:37.164 08:55:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:37.164 08:55:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:37.164 08:55:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:37.164 08:55:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:37.164 08:55:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.164 08:55:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:37.164 08:55:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:37.164 08:55:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:37.164 08:55:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:37.164 08:55:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:37.165 08:55:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:37.165 08:55:14 -- scripts/common.sh@344 -- # : 1 00:04:37.165 08:55:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:37.165 08:55:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.165 08:55:14 -- scripts/common.sh@364 -- # decimal 1 00:04:37.165 08:55:14 -- scripts/common.sh@352 -- # local d=1 00:04:37.165 08:55:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.165 08:55:14 -- scripts/common.sh@354 -- # echo 1 00:04:37.165 08:55:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:37.165 08:55:14 -- scripts/common.sh@365 -- # decimal 2 00:04:37.165 08:55:14 -- scripts/common.sh@352 -- # local d=2 00:04:37.165 08:55:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.165 08:55:14 -- scripts/common.sh@354 -- # echo 2 00:04:37.165 08:55:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:37.165 08:55:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:37.165 08:55:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:37.165 08:55:14 -- scripts/common.sh@367 -- # return 0 00:04:37.165 08:55:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.165 08:55:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:37.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.165 --rc genhtml_branch_coverage=1 00:04:37.165 --rc genhtml_function_coverage=1 00:04:37.165 --rc genhtml_legend=1 00:04:37.165 --rc geninfo_all_blocks=1 00:04:37.165 --rc geninfo_unexecuted_blocks=1 00:04:37.165 00:04:37.165 ' 00:04:37.165 08:55:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:37.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.165 --rc genhtml_branch_coverage=1 00:04:37.165 --rc genhtml_function_coverage=1 00:04:37.165 --rc genhtml_legend=1 00:04:37.165 --rc geninfo_all_blocks=1 00:04:37.165 --rc geninfo_unexecuted_blocks=1 00:04:37.165 00:04:37.165 ' 00:04:37.165 08:55:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:37.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.165 --rc genhtml_branch_coverage=1 00:04:37.165 --rc genhtml_function_coverage=1 00:04:37.165 --rc genhtml_legend=1 00:04:37.165 --rc geninfo_all_blocks=1 00:04:37.165 --rc geninfo_unexecuted_blocks=1 00:04:37.165 00:04:37.165 ' 00:04:37.165 08:55:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:37.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.165 --rc genhtml_branch_coverage=1 00:04:37.165 --rc genhtml_function_coverage=1 00:04:37.165 --rc genhtml_legend=1 00:04:37.165 --rc geninfo_all_blocks=1 00:04:37.165 --rc geninfo_unexecuted_blocks=1 00:04:37.165 00:04:37.165 ' 00:04:37.165 08:55:14 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:37.165 08:55:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.165 08:55:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.165 08:55:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.165 ************************************ 00:04:37.165 START TEST env_memory 00:04:37.165 ************************************ 00:04:37.165 08:55:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:37.165 00:04:37.165 00:04:37.165 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.165 http://cunit.sourceforge.net/ 00:04:37.165 00:04:37.165 00:04:37.165 Suite: memory 00:04:37.422 Test: alloc and free memory map ...[2024-11-17 08:55:14.107286] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:37.422 passed 00:04:37.422 Test: mem map translation ...[2024-11-17 08:55:14.137902] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:37.422 [2024-11-17 08:55:14.137939] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:37.422 [2024-11-17 08:55:14.137993] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:37.422 [2024-11-17 08:55:14.138003] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:37.422 passed 00:04:37.422 Test: mem map registration ...[2024-11-17 08:55:14.201783] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:37.422 [2024-11-17 08:55:14.201825] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:37.422 passed 00:04:37.422 Test: mem map adjacent registrations ...passed 00:04:37.422 00:04:37.422 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.422 suites 1 1 n/a 0 0 00:04:37.422 tests 4 4 4 0 0 00:04:37.423 asserts 152 152 152 0 n/a 00:04:37.423 00:04:37.423 Elapsed time = 0.217 seconds 00:04:37.423 00:04:37.423 real 0m0.236s 00:04:37.423 user 0m0.219s 00:04:37.423 sys 0m0.013s 00:04:37.423 08:55:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.423 08:55:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.423 ************************************ 00:04:37.423 END TEST env_memory 00:04:37.423 ************************************ 00:04:37.423 08:55:14 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:37.423 08:55:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.423 08:55:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.423 08:55:14 -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.423 ************************************ 00:04:37.423 START TEST env_vtophys 00:04:37.423 ************************************ 00:04:37.423 08:55:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:37.696 EAL: lib.eal log level changed from notice to debug 00:04:37.696 EAL: Detected lcore 0 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 1 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 2 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 3 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 4 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 5 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 6 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 7 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 8 as core 0 on socket 0 00:04:37.696 EAL: Detected lcore 9 as core 0 on socket 0 00:04:37.696 EAL: Maximum logical cores by configuration: 128 00:04:37.696 EAL: Detected CPU lcores: 10 00:04:37.696 EAL: Detected NUMA nodes: 1 00:04:37.696 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:37.696 EAL: Detected shared linkage of DPDK 00:04:37.696 EAL: No shared files mode enabled, IPC will be disabled 00:04:37.696 EAL: Selected IOVA mode 'PA' 00:04:37.696 EAL: Probing VFIO support... 00:04:37.696 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:37.696 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:37.696 EAL: Ask a virtual area of 0x2e000 bytes 00:04:37.696 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:37.696 EAL: Setting up physically contiguous memory... 00:04:37.696 EAL: Setting maximum number of open files to 524288 00:04:37.696 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:37.696 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:37.696 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.696 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:37.696 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.696 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.696 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:37.696 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:37.696 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.696 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:37.696 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.696 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.696 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:37.696 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:37.696 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.696 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:37.696 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.696 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.696 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:37.696 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:37.696 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.696 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:37.696 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.696 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.696 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:37.696 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
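The memory_ut binary above and the vtophys binary starting here can also be run directly once hugepages are configured; a minimal sketch, assuming the same repo layout, root privileges for setup.sh, and that HUGEMEM is given in megabytes:

    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
    /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys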
00:04:37.696 EAL: Hugepages will be freed exactly as allocated. 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: TSC frequency is ~2200000 KHz 00:04:37.696 EAL: Main lcore 0 is ready (tid=7ffb0d65ba00;cpuset=[0]) 00:04:37.696 EAL: Trying to obtain current memory policy. 00:04:37.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.696 EAL: Restoring previous memory policy: 0 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was expanded by 2MB 00:04:37.696 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:37.696 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:37.696 EAL: Mem event callback 'spdk:(nil)' registered 00:04:37.696 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:37.696 00:04:37.696 00:04:37.696 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.696 http://cunit.sourceforge.net/ 00:04:37.696 00:04:37.696 00:04:37.696 Suite: components_suite 00:04:37.696 Test: vtophys_malloc_test ...passed 00:04:37.696 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:37.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.696 EAL: Restoring previous memory policy: 4 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was expanded by 4MB 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was shrunk by 4MB 00:04:37.696 EAL: Trying to obtain current memory policy. 00:04:37.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.696 EAL: Restoring previous memory policy: 4 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was expanded by 6MB 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was shrunk by 6MB 00:04:37.696 EAL: Trying to obtain current memory policy. 00:04:37.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.696 EAL: Restoring previous memory policy: 4 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was expanded by 10MB 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was shrunk by 10MB 00:04:37.696 EAL: Trying to obtain current memory policy. 
00:04:37.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.696 EAL: Restoring previous memory policy: 4 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was expanded by 18MB 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was shrunk by 18MB 00:04:37.696 EAL: Trying to obtain current memory policy. 00:04:37.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.696 EAL: Restoring previous memory policy: 4 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was expanded by 34MB 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.696 EAL: No shared files mode enabled, IPC is disabled 00:04:37.696 EAL: Heap on socket 0 was shrunk by 34MB 00:04:37.696 EAL: Trying to obtain current memory policy. 00:04:37.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.696 EAL: Restoring previous memory policy: 4 00:04:37.696 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.696 EAL: request: mp_malloc_sync 00:04:37.697 EAL: No shared files mode enabled, IPC is disabled 00:04:37.697 EAL: Heap on socket 0 was expanded by 66MB 00:04:37.697 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.697 EAL: request: mp_malloc_sync 00:04:37.697 EAL: No shared files mode enabled, IPC is disabled 00:04:37.697 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.697 EAL: Trying to obtain current memory policy. 00:04:37.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.697 EAL: Restoring previous memory policy: 4 00:04:37.697 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.697 EAL: request: mp_malloc_sync 00:04:37.697 EAL: No shared files mode enabled, IPC is disabled 00:04:37.697 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.697 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.697 EAL: request: mp_malloc_sync 00:04:37.697 EAL: No shared files mode enabled, IPC is disabled 00:04:37.697 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.697 EAL: Trying to obtain current memory policy. 00:04:37.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.955 EAL: Restoring previous memory policy: 4 00:04:37.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.955 EAL: request: mp_malloc_sync 00:04:37.955 EAL: No shared files mode enabled, IPC is disabled 00:04:37.955 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.955 EAL: request: mp_malloc_sync 00:04:37.955 EAL: No shared files mode enabled, IPC is disabled 00:04:37.955 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.955 EAL: Trying to obtain current memory policy. 
00:04:37.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.955 EAL: Restoring previous memory policy: 4 00:04:37.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.955 EAL: request: mp_malloc_sync 00:04:37.955 EAL: No shared files mode enabled, IPC is disabled 00:04:37.955 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.955 EAL: request: mp_malloc_sync 00:04:37.955 EAL: No shared files mode enabled, IPC is disabled 00:04:37.955 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.955 EAL: Trying to obtain current memory policy. 00:04:37.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.214 EAL: Restoring previous memory policy: 4 00:04:38.214 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.214 EAL: request: mp_malloc_sync 00:04:38.214 EAL: No shared files mode enabled, IPC is disabled 00:04:38.214 EAL: Heap on socket 0 was expanded by 1026MB 00:04:38.214 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.473 passed 00:04:38.473 00:04:38.473 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.473 suites 1 1 n/a 0 0 00:04:38.473 tests 2 2 2 0 0 00:04:38.473 asserts 5288 5288 5288 0 n/a 00:04:38.473 00:04:38.473 Elapsed time = 0.673 seconds 00:04:38.473 EAL: request: mp_malloc_sync 00:04:38.473 EAL: No shared files mode enabled, IPC is disabled 00:04:38.473 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:38.473 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.473 EAL: request: mp_malloc_sync 00:04:38.473 EAL: No shared files mode enabled, IPC is disabled 00:04:38.473 EAL: Heap on socket 0 was shrunk by 2MB 00:04:38.473 EAL: No shared files mode enabled, IPC is disabled 00:04:38.473 EAL: No shared files mode enabled, IPC is disabled 00:04:38.473 EAL: No shared files mode enabled, IPC is disabled 00:04:38.473 00:04:38.473 real 0m0.865s 00:04:38.473 user 0m0.448s 00:04:38.473 sys 0m0.291s 00:04:38.473 08:55:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.473 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.473 ************************************ 00:04:38.473 END TEST env_vtophys 00:04:38.473 ************************************ 00:04:38.473 08:55:15 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.473 08:55:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.473 08:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.473 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.473 ************************************ 00:04:38.473 START TEST env_pci 00:04:38.473 ************************************ 00:04:38.473 08:55:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.473 00:04:38.473 00:04:38.473 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.473 http://cunit.sourceforge.net/ 00:04:38.473 00:04:38.473 00:04:38.473 Suite: pci 00:04:38.473 Test: pci_hook ...[2024-11-17 08:55:15.271167] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53697 has claimed it 00:04:38.473 passed 00:04:38.473 00:04:38.473 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.473 suites 1 1 n/a 0 0 00:04:38.473 tests 1 1 1 0 0 00:04:38.473 asserts 25 25 25 0 n/a 00:04:38.473 00:04:38.473 Elapsed time = 0.003 seconds 00:04:38.473 EAL: Cannot find device (10000:00:01.0) 00:04:38.473 EAL: Failed to attach device 
on primary process 00:04:38.473 00:04:38.473 real 0m0.018s 00:04:38.473 user 0m0.009s 00:04:38.473 sys 0m0.008s 00:04:38.473 08:55:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.473 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.473 ************************************ 00:04:38.473 END TEST env_pci 00:04:38.473 ************************************ 00:04:38.473 08:55:15 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:38.473 08:55:15 -- env/env.sh@15 -- # uname 00:04:38.473 08:55:15 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:38.473 08:55:15 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:38.473 08:55:15 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.473 08:55:15 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:38.473 08:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.473 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.473 ************************************ 00:04:38.473 START TEST env_dpdk_post_init 00:04:38.473 ************************************ 00:04:38.473 08:55:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.473 EAL: Detected CPU lcores: 10 00:04:38.473 EAL: Detected NUMA nodes: 1 00:04:38.473 EAL: Detected shared linkage of DPDK 00:04:38.473 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.473 EAL: Selected IOVA mode 'PA' 00:04:38.733 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.733 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:38.733 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:38.733 Starting DPDK initialization... 00:04:38.733 Starting SPDK post initialization... 00:04:38.733 SPDK NVMe probe 00:04:38.733 Attaching to 0000:00:06.0 00:04:38.733 Attaching to 0000:00:07.0 00:04:38.733 Attached to 0000:00:06.0 00:04:38.733 Attached to 0000:00:07.0 00:04:38.733 Cleaning up... 
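env_dpdk_post_init attaches the two emulated controllers through the spdk_nvme userspace driver. Which driver currently owns a controller can be checked the same way setup.sh does, assuming standard sysfs paths:

    for bdf in 0000:00:06.0 0000:00:07.0; do
        printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
    done
    # or, for the full table shown earlier in this log:
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh status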
00:04:38.733 00:04:38.733 real 0m0.164s 00:04:38.733 user 0m0.038s 00:04:38.733 sys 0m0.027s 00:04:38.733 08:55:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.733 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.733 ************************************ 00:04:38.733 END TEST env_dpdk_post_init 00:04:38.733 ************************************ 00:04:38.733 08:55:15 -- env/env.sh@26 -- # uname 00:04:38.733 08:55:15 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.733 08:55:15 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.733 08:55:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.733 08:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.733 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.733 ************************************ 00:04:38.733 START TEST env_mem_callbacks 00:04:38.733 ************************************ 00:04:38.733 08:55:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.733 EAL: Detected CPU lcores: 10 00:04:38.733 EAL: Detected NUMA nodes: 1 00:04:38.733 EAL: Detected shared linkage of DPDK 00:04:38.733 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.733 EAL: Selected IOVA mode 'PA' 00:04:38.993 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.993 00:04:38.993 00:04:38.993 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.993 http://cunit.sourceforge.net/ 00:04:38.993 00:04:38.993 00:04:38.993 Suite: memory 00:04:38.993 Test: test ... 00:04:38.993 register 0x200000200000 2097152 00:04:38.993 malloc 3145728 00:04:38.993 register 0x200000400000 4194304 00:04:38.993 buf 0x200000500000 len 3145728 PASSED 00:04:38.993 malloc 64 00:04:38.993 buf 0x2000004fff40 len 64 PASSED 00:04:38.993 malloc 4194304 00:04:38.993 register 0x200000800000 6291456 00:04:38.993 buf 0x200000a00000 len 4194304 PASSED 00:04:38.993 free 0x200000500000 3145728 00:04:38.993 free 0x2000004fff40 64 00:04:38.993 unregister 0x200000400000 4194304 PASSED 00:04:38.993 free 0x200000a00000 4194304 00:04:38.993 unregister 0x200000800000 6291456 PASSED 00:04:38.993 malloc 8388608 00:04:38.993 register 0x200000400000 10485760 00:04:38.993 buf 0x200000600000 len 8388608 PASSED 00:04:38.993 free 0x200000600000 8388608 00:04:38.993 unregister 0x200000400000 10485760 PASSED 00:04:38.993 passed 00:04:38.993 00:04:38.993 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.993 suites 1 1 n/a 0 0 00:04:38.993 tests 1 1 1 0 0 00:04:38.993 asserts 15 15 15 0 n/a 00:04:38.993 00:04:38.993 Elapsed time = 0.008 seconds 00:04:38.993 00:04:38.993 real 0m0.139s 00:04:38.993 user 0m0.015s 00:04:38.993 sys 0m0.022s 00:04:38.993 08:55:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.993 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.993 ************************************ 00:04:38.993 END TEST env_mem_callbacks 00:04:38.993 ************************************ 00:04:38.993 00:04:38.993 real 0m1.861s 00:04:38.993 user 0m0.911s 00:04:38.993 sys 0m0.610s 00:04:38.993 08:55:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.993 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.993 ************************************ 00:04:38.993 END TEST env 00:04:38.993 ************************************ 00:04:38.993 08:55:15 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
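Each suite in this log is launched through the run_test helper, which prints the banner and timing lines seen above. A simplified sketch of that wrapper (not the actual autotest_common.sh implementation) is:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }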
00:04:38.993 08:55:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.993 08:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.993 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.993 ************************************ 00:04:38.993 START TEST rpc 00:04:38.993 ************************************ 00:04:38.993 08:55:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:38.993 * Looking for test storage... 00:04:38.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.993 08:55:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:38.993 08:55:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:38.993 08:55:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:39.253 08:55:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:39.253 08:55:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:39.253 08:55:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:39.253 08:55:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:39.253 08:55:15 -- scripts/common.sh@335 -- # IFS=.-: 00:04:39.253 08:55:15 -- scripts/common.sh@335 -- # read -ra ver1 00:04:39.253 08:55:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.253 08:55:15 -- scripts/common.sh@336 -- # read -ra ver2 00:04:39.253 08:55:15 -- scripts/common.sh@337 -- # local 'op=<' 00:04:39.253 08:55:15 -- scripts/common.sh@339 -- # ver1_l=2 00:04:39.253 08:55:15 -- scripts/common.sh@340 -- # ver2_l=1 00:04:39.253 08:55:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:39.253 08:55:15 -- scripts/common.sh@343 -- # case "$op" in 00:04:39.253 08:55:15 -- scripts/common.sh@344 -- # : 1 00:04:39.253 08:55:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:39.253 08:55:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.253 08:55:15 -- scripts/common.sh@364 -- # decimal 1 00:04:39.253 08:55:15 -- scripts/common.sh@352 -- # local d=1 00:04:39.253 08:55:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.253 08:55:15 -- scripts/common.sh@354 -- # echo 1 00:04:39.253 08:55:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:39.253 08:55:15 -- scripts/common.sh@365 -- # decimal 2 00:04:39.253 08:55:15 -- scripts/common.sh@352 -- # local d=2 00:04:39.253 08:55:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.253 08:55:15 -- scripts/common.sh@354 -- # echo 2 00:04:39.253 08:55:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:39.253 08:55:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:39.253 08:55:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:39.253 08:55:15 -- scripts/common.sh@367 -- # return 0 00:04:39.253 08:55:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.253 08:55:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.253 --rc genhtml_branch_coverage=1 00:04:39.253 --rc genhtml_function_coverage=1 00:04:39.253 --rc genhtml_legend=1 00:04:39.253 --rc geninfo_all_blocks=1 00:04:39.253 --rc geninfo_unexecuted_blocks=1 00:04:39.253 00:04:39.253 ' 00:04:39.253 08:55:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.253 --rc genhtml_branch_coverage=1 00:04:39.253 --rc genhtml_function_coverage=1 00:04:39.253 --rc genhtml_legend=1 00:04:39.253 --rc geninfo_all_blocks=1 00:04:39.253 --rc geninfo_unexecuted_blocks=1 00:04:39.253 00:04:39.253 ' 00:04:39.253 08:55:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.253 --rc genhtml_branch_coverage=1 00:04:39.253 --rc genhtml_function_coverage=1 00:04:39.253 --rc genhtml_legend=1 00:04:39.253 --rc geninfo_all_blocks=1 00:04:39.253 --rc geninfo_unexecuted_blocks=1 00:04:39.253 00:04:39.253 ' 00:04:39.253 08:55:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:39.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.253 --rc genhtml_branch_coverage=1 00:04:39.253 --rc genhtml_function_coverage=1 00:04:39.253 --rc genhtml_legend=1 00:04:39.253 --rc geninfo_all_blocks=1 00:04:39.253 --rc geninfo_unexecuted_blocks=1 00:04:39.253 00:04:39.253 ' 00:04:39.253 08:55:15 -- rpc/rpc.sh@65 -- # spdk_pid=53814 00:04:39.253 08:55:15 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.253 08:55:15 -- rpc/rpc.sh@67 -- # waitforlisten 53814 00:04:39.253 08:55:15 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:39.254 08:55:15 -- common/autotest_common.sh@829 -- # '[' -z 53814 ']' 00:04:39.254 08:55:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.254 08:55:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.254 08:55:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
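rpc.sh starts spdk_tgt with the bdev tracepoint group enabled and then blocks in waitforlisten until the RPC socket answers. Doing the same by hand, assuming the build and script paths used in this run, looks roughly like:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # rough stand-in for waitforlisten: poll until the RPC socket responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done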
00:04:39.254 08:55:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.254 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:39.254 [2024-11-17 08:55:16.032868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:39.254 [2024-11-17 08:55:16.032986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53814 ] 00:04:39.254 [2024-11-17 08:55:16.176523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.513 [2024-11-17 08:55:16.244973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:39.513 [2024-11-17 08:55:16.245150] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.513 [2024-11-17 08:55:16.245166] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53814' to capture a snapshot of events at runtime. 00:04:39.513 [2024-11-17 08:55:16.245177] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53814 for offline analysis/debug. 00:04:39.513 [2024-11-17 08:55:16.245212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.450 08:55:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.450 08:55:17 -- common/autotest_common.sh@862 -- # return 0 00:04:40.450 08:55:17 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.450 08:55:17 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.450 08:55:17 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:40.450 08:55:17 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:40.450 08:55:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.450 08:55:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.450 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.450 ************************************ 00:04:40.450 START TEST rpc_integrity 00:04:40.450 ************************************ 00:04:40.450 08:55:17 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:40.450 08:55:17 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.450 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.450 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.450 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.450 08:55:17 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.450 08:55:17 -- rpc/rpc.sh@13 -- # jq length 00:04:40.450 08:55:17 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.450 08:55:17 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.450 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.450 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.450 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.450 08:55:17 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:40.450 08:55:17 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.450 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.450 08:55:17 -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.450 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.450 08:55:17 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.450 { 00:04:40.450 "name": "Malloc0", 00:04:40.450 "aliases": [ 00:04:40.450 "a5fa7c5c-0a48-4245-bb36-2580385fc81f" 00:04:40.450 ], 00:04:40.450 "product_name": "Malloc disk", 00:04:40.450 "block_size": 512, 00:04:40.450 "num_blocks": 16384, 00:04:40.450 "uuid": "a5fa7c5c-0a48-4245-bb36-2580385fc81f", 00:04:40.450 "assigned_rate_limits": { 00:04:40.450 "rw_ios_per_sec": 0, 00:04:40.450 "rw_mbytes_per_sec": 0, 00:04:40.450 "r_mbytes_per_sec": 0, 00:04:40.450 "w_mbytes_per_sec": 0 00:04:40.450 }, 00:04:40.450 "claimed": false, 00:04:40.450 "zoned": false, 00:04:40.451 "supported_io_types": { 00:04:40.451 "read": true, 00:04:40.451 "write": true, 00:04:40.451 "unmap": true, 00:04:40.451 "write_zeroes": true, 00:04:40.451 "flush": true, 00:04:40.451 "reset": true, 00:04:40.451 "compare": false, 00:04:40.451 "compare_and_write": false, 00:04:40.451 "abort": true, 00:04:40.451 "nvme_admin": false, 00:04:40.451 "nvme_io": false 00:04:40.451 }, 00:04:40.451 "memory_domains": [ 00:04:40.451 { 00:04:40.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.451 "dma_device_type": 2 00:04:40.451 } 00:04:40.451 ], 00:04:40.451 "driver_specific": {} 00:04:40.451 } 00:04:40.451 ]' 00:04:40.451 08:55:17 -- rpc/rpc.sh@17 -- # jq length 00:04:40.451 08:55:17 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.451 08:55:17 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:40.451 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.451 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.451 [2024-11-17 08:55:17.191434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:40.451 [2024-11-17 08:55:17.191508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.451 [2024-11-17 08:55:17.191523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeac4c0 00:04:40.451 [2024-11-17 08:55:17.191531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.451 [2024-11-17 08:55:17.193053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.451 [2024-11-17 08:55:17.193099] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.451 Passthru0 00:04:40.451 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.451 08:55:17 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.451 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.451 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.451 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.451 08:55:17 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.451 { 00:04:40.451 "name": "Malloc0", 00:04:40.451 "aliases": [ 00:04:40.451 "a5fa7c5c-0a48-4245-bb36-2580385fc81f" 00:04:40.451 ], 00:04:40.451 "product_name": "Malloc disk", 00:04:40.451 "block_size": 512, 00:04:40.451 "num_blocks": 16384, 00:04:40.451 "uuid": "a5fa7c5c-0a48-4245-bb36-2580385fc81f", 00:04:40.451 "assigned_rate_limits": { 00:04:40.451 "rw_ios_per_sec": 0, 00:04:40.451 "rw_mbytes_per_sec": 0, 00:04:40.451 "r_mbytes_per_sec": 0, 00:04:40.451 "w_mbytes_per_sec": 0 00:04:40.451 }, 00:04:40.451 "claimed": true, 00:04:40.451 "claim_type": "exclusive_write", 00:04:40.451 "zoned": false, 00:04:40.451 "supported_io_types": { 00:04:40.451 "read": true, 
00:04:40.451 "write": true, 00:04:40.451 "unmap": true, 00:04:40.451 "write_zeroes": true, 00:04:40.451 "flush": true, 00:04:40.451 "reset": true, 00:04:40.451 "compare": false, 00:04:40.451 "compare_and_write": false, 00:04:40.451 "abort": true, 00:04:40.451 "nvme_admin": false, 00:04:40.451 "nvme_io": false 00:04:40.451 }, 00:04:40.451 "memory_domains": [ 00:04:40.451 { 00:04:40.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.451 "dma_device_type": 2 00:04:40.451 } 00:04:40.451 ], 00:04:40.451 "driver_specific": {} 00:04:40.451 }, 00:04:40.451 { 00:04:40.451 "name": "Passthru0", 00:04:40.451 "aliases": [ 00:04:40.451 "23b64764-1d42-5a81-a586-2328e1318217" 00:04:40.451 ], 00:04:40.451 "product_name": "passthru", 00:04:40.451 "block_size": 512, 00:04:40.451 "num_blocks": 16384, 00:04:40.451 "uuid": "23b64764-1d42-5a81-a586-2328e1318217", 00:04:40.451 "assigned_rate_limits": { 00:04:40.451 "rw_ios_per_sec": 0, 00:04:40.451 "rw_mbytes_per_sec": 0, 00:04:40.451 "r_mbytes_per_sec": 0, 00:04:40.451 "w_mbytes_per_sec": 0 00:04:40.451 }, 00:04:40.451 "claimed": false, 00:04:40.451 "zoned": false, 00:04:40.451 "supported_io_types": { 00:04:40.451 "read": true, 00:04:40.451 "write": true, 00:04:40.451 "unmap": true, 00:04:40.451 "write_zeroes": true, 00:04:40.451 "flush": true, 00:04:40.451 "reset": true, 00:04:40.451 "compare": false, 00:04:40.451 "compare_and_write": false, 00:04:40.451 "abort": true, 00:04:40.451 "nvme_admin": false, 00:04:40.451 "nvme_io": false 00:04:40.451 }, 00:04:40.451 "memory_domains": [ 00:04:40.451 { 00:04:40.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.451 "dma_device_type": 2 00:04:40.451 } 00:04:40.451 ], 00:04:40.451 "driver_specific": { 00:04:40.451 "passthru": { 00:04:40.451 "name": "Passthru0", 00:04:40.451 "base_bdev_name": "Malloc0" 00:04:40.451 } 00:04:40.451 } 00:04:40.451 } 00:04:40.451 ]' 00:04:40.451 08:55:17 -- rpc/rpc.sh@21 -- # jq length 00:04:40.451 08:55:17 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.451 08:55:17 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.451 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.451 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.451 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.451 08:55:17 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:40.451 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.451 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.451 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.451 08:55:17 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.451 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.451 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.451 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.451 08:55:17 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.451 08:55:17 -- rpc/rpc.sh@26 -- # jq length 00:04:40.451 08:55:17 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.451 00:04:40.451 real 0m0.319s 00:04:40.451 user 0m0.213s 00:04:40.451 sys 0m0.034s 00:04:40.451 08:55:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.451 ************************************ 00:04:40.451 END TEST rpc_integrity 00:04:40.451 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.451 ************************************ 00:04:40.711 08:55:17 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.711 08:55:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:04:40.711 08:55:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.711 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.711 ************************************ 00:04:40.711 START TEST rpc_plugins 00:04:40.711 ************************************ 00:04:40.711 08:55:17 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:40.711 08:55:17 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.711 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.711 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.711 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.711 08:55:17 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.711 08:55:17 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.711 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.711 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.711 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.711 08:55:17 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.711 { 00:04:40.711 "name": "Malloc1", 00:04:40.711 "aliases": [ 00:04:40.711 "0f6367e8-021d-474f-b8d3-4f3691763772" 00:04:40.711 ], 00:04:40.711 "product_name": "Malloc disk", 00:04:40.711 "block_size": 4096, 00:04:40.711 "num_blocks": 256, 00:04:40.711 "uuid": "0f6367e8-021d-474f-b8d3-4f3691763772", 00:04:40.711 "assigned_rate_limits": { 00:04:40.711 "rw_ios_per_sec": 0, 00:04:40.711 "rw_mbytes_per_sec": 0, 00:04:40.711 "r_mbytes_per_sec": 0, 00:04:40.711 "w_mbytes_per_sec": 0 00:04:40.711 }, 00:04:40.711 "claimed": false, 00:04:40.711 "zoned": false, 00:04:40.711 "supported_io_types": { 00:04:40.711 "read": true, 00:04:40.711 "write": true, 00:04:40.711 "unmap": true, 00:04:40.711 "write_zeroes": true, 00:04:40.711 "flush": true, 00:04:40.711 "reset": true, 00:04:40.711 "compare": false, 00:04:40.711 "compare_and_write": false, 00:04:40.711 "abort": true, 00:04:40.711 "nvme_admin": false, 00:04:40.711 "nvme_io": false 00:04:40.711 }, 00:04:40.711 "memory_domains": [ 00:04:40.711 { 00:04:40.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.711 "dma_device_type": 2 00:04:40.711 } 00:04:40.711 ], 00:04:40.711 "driver_specific": {} 00:04:40.711 } 00:04:40.711 ]' 00:04:40.711 08:55:17 -- rpc/rpc.sh@32 -- # jq length 00:04:40.711 08:55:17 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.711 08:55:17 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.711 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.711 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.711 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.711 08:55:17 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.711 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.711 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.711 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.711 08:55:17 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.711 08:55:17 -- rpc/rpc.sh@36 -- # jq length 00:04:40.711 08:55:17 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.711 00:04:40.711 real 0m0.144s 00:04:40.711 user 0m0.089s 00:04:40.711 sys 0m0.017s 00:04:40.711 08:55:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.711 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.711 ************************************ 00:04:40.711 END TEST rpc_plugins 00:04:40.711 ************************************ 00:04:40.711 08:55:17 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:04:40.711 08:55:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.711 08:55:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.711 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.711 ************************************ 00:04:40.711 START TEST rpc_trace_cmd_test 00:04:40.711 ************************************ 00:04:40.711 08:55:17 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:40.711 08:55:17 -- rpc/rpc.sh@40 -- # local info 00:04:40.711 08:55:17 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.711 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.711 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.711 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.711 08:55:17 -- rpc/rpc.sh@42 -- # info='{ 00:04:40.711 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53814", 00:04:40.711 "tpoint_group_mask": "0x8", 00:04:40.711 "iscsi_conn": { 00:04:40.711 "mask": "0x2", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "scsi": { 00:04:40.711 "mask": "0x4", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "bdev": { 00:04:40.711 "mask": "0x8", 00:04:40.711 "tpoint_mask": "0xffffffffffffffff" 00:04:40.711 }, 00:04:40.711 "nvmf_rdma": { 00:04:40.711 "mask": "0x10", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "nvmf_tcp": { 00:04:40.711 "mask": "0x20", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "ftl": { 00:04:40.711 "mask": "0x40", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "blobfs": { 00:04:40.711 "mask": "0x80", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "dsa": { 00:04:40.711 "mask": "0x200", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "thread": { 00:04:40.711 "mask": "0x400", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "nvme_pcie": { 00:04:40.711 "mask": "0x800", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "iaa": { 00:04:40.711 "mask": "0x1000", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "nvme_tcp": { 00:04:40.711 "mask": "0x2000", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 }, 00:04:40.711 "bdev_nvme": { 00:04:40.711 "mask": "0x4000", 00:04:40.711 "tpoint_mask": "0x0" 00:04:40.711 } 00:04:40.711 }' 00:04:40.711 08:55:17 -- rpc/rpc.sh@43 -- # jq length 00:04:40.970 08:55:17 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:40.970 08:55:17 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:40.970 08:55:17 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:40.970 08:55:17 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:40.970 08:55:17 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:40.970 08:55:17 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:40.970 08:55:17 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:40.970 08:55:17 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:40.971 08:55:17 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:40.971 00:04:40.971 real 0m0.271s 00:04:40.971 user 0m0.232s 00:04:40.971 sys 0m0.031s 00:04:40.971 08:55:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.971 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.971 ************************************ 00:04:40.971 END TEST rpc_trace_cmd_test 00:04:40.971 ************************************ 00:04:41.230 08:55:17 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:41.230 08:55:17 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.230 08:55:17 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:04:41.230 08:55:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.230 08:55:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.230 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:41.230 ************************************ 00:04:41.230 START TEST rpc_daemon_integrity 00:04:41.230 ************************************ 00:04:41.230 08:55:17 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:41.230 08:55:17 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.230 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.230 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:41.230 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.230 08:55:17 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.230 08:55:17 -- rpc/rpc.sh@13 -- # jq length 00:04:41.230 08:55:17 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.230 08:55:17 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.230 08:55:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.230 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:04:41.230 08:55:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.230 08:55:17 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:41.230 08:55:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.230 08:55:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.230 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.230 08:55:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.230 08:55:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.230 { 00:04:41.230 "name": "Malloc2", 00:04:41.230 "aliases": [ 00:04:41.230 "fb803d81-8532-4dbe-977a-9a4471ada73d" 00:04:41.230 ], 00:04:41.230 "product_name": "Malloc disk", 00:04:41.230 "block_size": 512, 00:04:41.230 "num_blocks": 16384, 00:04:41.230 "uuid": "fb803d81-8532-4dbe-977a-9a4471ada73d", 00:04:41.230 "assigned_rate_limits": { 00:04:41.230 "rw_ios_per_sec": 0, 00:04:41.230 "rw_mbytes_per_sec": 0, 00:04:41.230 "r_mbytes_per_sec": 0, 00:04:41.230 "w_mbytes_per_sec": 0 00:04:41.230 }, 00:04:41.230 "claimed": false, 00:04:41.230 "zoned": false, 00:04:41.230 "supported_io_types": { 00:04:41.230 "read": true, 00:04:41.230 "write": true, 00:04:41.230 "unmap": true, 00:04:41.230 "write_zeroes": true, 00:04:41.230 "flush": true, 00:04:41.230 "reset": true, 00:04:41.230 "compare": false, 00:04:41.230 "compare_and_write": false, 00:04:41.230 "abort": true, 00:04:41.230 "nvme_admin": false, 00:04:41.230 "nvme_io": false 00:04:41.230 }, 00:04:41.230 "memory_domains": [ 00:04:41.230 { 00:04:41.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.230 "dma_device_type": 2 00:04:41.230 } 00:04:41.230 ], 00:04:41.230 "driver_specific": {} 00:04:41.230 } 00:04:41.230 ]' 00:04:41.230 08:55:18 -- rpc/rpc.sh@17 -- # jq length 00:04:41.230 08:55:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.230 08:55:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:41.230 08:55:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.231 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 [2024-11-17 08:55:18.075864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:41.231 [2024-11-17 08:55:18.075925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.231 [2024-11-17 08:55:18.075956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeacc40 00:04:41.231 [2024-11-17 
08:55:18.075980] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.231 [2024-11-17 08:55:18.077220] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.231 [2024-11-17 08:55:18.077267] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.231 Passthru0 00:04:41.231 08:55:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.231 08:55:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.231 08:55:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.231 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 08:55:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.231 08:55:18 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.231 { 00:04:41.231 "name": "Malloc2", 00:04:41.231 "aliases": [ 00:04:41.231 "fb803d81-8532-4dbe-977a-9a4471ada73d" 00:04:41.231 ], 00:04:41.231 "product_name": "Malloc disk", 00:04:41.231 "block_size": 512, 00:04:41.231 "num_blocks": 16384, 00:04:41.231 "uuid": "fb803d81-8532-4dbe-977a-9a4471ada73d", 00:04:41.231 "assigned_rate_limits": { 00:04:41.231 "rw_ios_per_sec": 0, 00:04:41.231 "rw_mbytes_per_sec": 0, 00:04:41.231 "r_mbytes_per_sec": 0, 00:04:41.231 "w_mbytes_per_sec": 0 00:04:41.231 }, 00:04:41.231 "claimed": true, 00:04:41.231 "claim_type": "exclusive_write", 00:04:41.231 "zoned": false, 00:04:41.231 "supported_io_types": { 00:04:41.231 "read": true, 00:04:41.231 "write": true, 00:04:41.231 "unmap": true, 00:04:41.231 "write_zeroes": true, 00:04:41.231 "flush": true, 00:04:41.231 "reset": true, 00:04:41.231 "compare": false, 00:04:41.231 "compare_and_write": false, 00:04:41.231 "abort": true, 00:04:41.231 "nvme_admin": false, 00:04:41.231 "nvme_io": false 00:04:41.231 }, 00:04:41.231 "memory_domains": [ 00:04:41.231 { 00:04:41.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.231 "dma_device_type": 2 00:04:41.231 } 00:04:41.231 ], 00:04:41.231 "driver_specific": {} 00:04:41.231 }, 00:04:41.231 { 00:04:41.231 "name": "Passthru0", 00:04:41.231 "aliases": [ 00:04:41.231 "57cba913-9510-5368-acba-93894e6a9d5c" 00:04:41.231 ], 00:04:41.231 "product_name": "passthru", 00:04:41.231 "block_size": 512, 00:04:41.231 "num_blocks": 16384, 00:04:41.231 "uuid": "57cba913-9510-5368-acba-93894e6a9d5c", 00:04:41.231 "assigned_rate_limits": { 00:04:41.231 "rw_ios_per_sec": 0, 00:04:41.231 "rw_mbytes_per_sec": 0, 00:04:41.231 "r_mbytes_per_sec": 0, 00:04:41.231 "w_mbytes_per_sec": 0 00:04:41.231 }, 00:04:41.231 "claimed": false, 00:04:41.231 "zoned": false, 00:04:41.231 "supported_io_types": { 00:04:41.231 "read": true, 00:04:41.231 "write": true, 00:04:41.231 "unmap": true, 00:04:41.231 "write_zeroes": true, 00:04:41.231 "flush": true, 00:04:41.231 "reset": true, 00:04:41.231 "compare": false, 00:04:41.231 "compare_and_write": false, 00:04:41.231 "abort": true, 00:04:41.231 "nvme_admin": false, 00:04:41.231 "nvme_io": false 00:04:41.231 }, 00:04:41.231 "memory_domains": [ 00:04:41.231 { 00:04:41.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.231 "dma_device_type": 2 00:04:41.231 } 00:04:41.231 ], 00:04:41.231 "driver_specific": { 00:04:41.231 "passthru": { 00:04:41.231 "name": "Passthru0", 00:04:41.231 "base_bdev_name": "Malloc2" 00:04:41.231 } 00:04:41.231 } 00:04:41.231 } 00:04:41.231 ]' 00:04:41.231 08:55:18 -- rpc/rpc.sh@21 -- # jq length 00:04:41.511 08:55:18 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.511 08:55:18 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.511 08:55:18 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.511 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.511 08:55:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.511 08:55:18 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:41.511 08:55:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.511 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.511 08:55:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.511 08:55:18 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.511 08:55:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.511 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.511 08:55:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.511 08:55:18 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.511 08:55:18 -- rpc/rpc.sh@26 -- # jq length 00:04:41.511 08:55:18 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.511 00:04:41.511 real 0m0.321s 00:04:41.511 user 0m0.218s 00:04:41.511 sys 0m0.038s 00:04:41.511 08:55:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.511 ************************************ 00:04:41.511 END TEST rpc_daemon_integrity 00:04:41.511 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.511 ************************************ 00:04:41.511 08:55:18 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:41.511 08:55:18 -- rpc/rpc.sh@84 -- # killprocess 53814 00:04:41.511 08:55:18 -- common/autotest_common.sh@936 -- # '[' -z 53814 ']' 00:04:41.511 08:55:18 -- common/autotest_common.sh@940 -- # kill -0 53814 00:04:41.511 08:55:18 -- common/autotest_common.sh@941 -- # uname 00:04:41.511 08:55:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.511 08:55:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 53814 00:04:41.511 08:55:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.511 08:55:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.511 killing process with pid 53814 00:04:41.511 08:55:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 53814' 00:04:41.511 08:55:18 -- common/autotest_common.sh@955 -- # kill 53814 00:04:41.511 08:55:18 -- common/autotest_common.sh@960 -- # wait 53814 00:04:41.770 00:04:41.770 real 0m2.800s 00:04:41.770 user 0m3.770s 00:04:41.770 sys 0m0.559s 00:04:41.770 08:55:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.770 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.770 ************************************ 00:04:41.770 END TEST rpc 00:04:41.770 ************************************ 00:04:41.770 08:55:18 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.770 08:55:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.770 08:55:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.770 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:41.770 ************************************ 00:04:41.770 START TEST rpc_client 00:04:41.770 ************************************ 00:04:41.770 08:55:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.031 * Looking for test storage... 
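The killprocess helper used at the end of the rpc suite signals the target and reaps it, first checking that the PID still refers to an SPDK reactor rather than some unrelated process. A simplified rendering of that shutdown pattern (a sketch, not the exact helper from autotest_common.sh; it works only when the target is a child of the calling shell):

  stop_tgt() {
      local pid=$1
      # bail out if the process is already gone
      kill -0 "$pid" 2>/dev/null || return 1
      # the SPDK app's main thread reports its comm as reactor_<core>, e.g. reactor_0
      [[ "$(ps --no-headers -o comm= "$pid")" == reactor_* ]] || return 1
      echo "killing process with pid $pid"
      kill "$pid"
      # wait reaps the child and returns its exit status
      wait "$pid"
  }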
00:04:42.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:42.031 08:55:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:42.031 08:55:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:42.031 08:55:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.031 08:55:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.031 08:55:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.031 08:55:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.031 08:55:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.031 08:55:18 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.031 08:55:18 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.031 08:55:18 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.031 08:55:18 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.031 08:55:18 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.031 08:55:18 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.031 08:55:18 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.031 08:55:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.031 08:55:18 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.031 08:55:18 -- scripts/common.sh@344 -- # : 1 00:04:42.031 08:55:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.031 08:55:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.031 08:55:18 -- scripts/common.sh@364 -- # decimal 1 00:04:42.031 08:55:18 -- scripts/common.sh@352 -- # local d=1 00:04:42.031 08:55:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.031 08:55:18 -- scripts/common.sh@354 -- # echo 1 00:04:42.031 08:55:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.031 08:55:18 -- scripts/common.sh@365 -- # decimal 2 00:04:42.031 08:55:18 -- scripts/common.sh@352 -- # local d=2 00:04:42.031 08:55:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.031 08:55:18 -- scripts/common.sh@354 -- # echo 2 00:04:42.031 08:55:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.031 08:55:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.031 08:55:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.031 08:55:18 -- scripts/common.sh@367 -- # return 0 00:04:42.031 08:55:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.031 08:55:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.031 --rc genhtml_branch_coverage=1 00:04:42.031 --rc genhtml_function_coverage=1 00:04:42.031 --rc genhtml_legend=1 00:04:42.031 --rc geninfo_all_blocks=1 00:04:42.031 --rc geninfo_unexecuted_blocks=1 00:04:42.031 00:04:42.031 ' 00:04:42.031 08:55:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.031 --rc genhtml_branch_coverage=1 00:04:42.031 --rc genhtml_function_coverage=1 00:04:42.031 --rc genhtml_legend=1 00:04:42.031 --rc geninfo_all_blocks=1 00:04:42.031 --rc geninfo_unexecuted_blocks=1 00:04:42.031 00:04:42.031 ' 00:04:42.031 08:55:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.031 --rc genhtml_branch_coverage=1 00:04:42.031 --rc genhtml_function_coverage=1 00:04:42.031 --rc genhtml_legend=1 00:04:42.031 --rc geninfo_all_blocks=1 00:04:42.031 --rc geninfo_unexecuted_blocks=1 00:04:42.031 00:04:42.031 ' 00:04:42.031 
08:55:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.031 --rc genhtml_branch_coverage=1 00:04:42.031 --rc genhtml_function_coverage=1 00:04:42.031 --rc genhtml_legend=1 00:04:42.031 --rc geninfo_all_blocks=1 00:04:42.031 --rc geninfo_unexecuted_blocks=1 00:04:42.031 00:04:42.031 ' 00:04:42.031 08:55:18 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:42.031 OK 00:04:42.031 08:55:18 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.031 00:04:42.031 real 0m0.199s 00:04:42.031 user 0m0.129s 00:04:42.031 sys 0m0.082s 00:04:42.031 08:55:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.031 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:42.031 ************************************ 00:04:42.031 END TEST rpc_client 00:04:42.031 ************************************ 00:04:42.031 08:55:18 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.031 08:55:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.031 08:55:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.031 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:42.031 ************************************ 00:04:42.031 START TEST json_config 00:04:42.031 ************************************ 00:04:42.031 08:55:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.031 08:55:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:42.031 08:55:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:42.031 08:55:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.335 08:55:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.335 08:55:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.335 08:55:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.335 08:55:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.335 08:55:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.335 08:55:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.335 08:55:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.335 08:55:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.335 08:55:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.335 08:55:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.335 08:55:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.335 08:55:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.335 08:55:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.335 08:55:19 -- scripts/common.sh@344 -- # : 1 00:04:42.335 08:55:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.335 08:55:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.335 08:55:19 -- scripts/common.sh@364 -- # decimal 1 00:04:42.335 08:55:19 -- scripts/common.sh@352 -- # local d=1 00:04:42.336 08:55:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.336 08:55:19 -- scripts/common.sh@354 -- # echo 1 00:04:42.336 08:55:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.336 08:55:19 -- scripts/common.sh@365 -- # decimal 2 00:04:42.336 08:55:19 -- scripts/common.sh@352 -- # local d=2 00:04:42.336 08:55:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.336 08:55:19 -- scripts/common.sh@354 -- # echo 2 00:04:42.336 08:55:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.336 08:55:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.336 08:55:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.336 08:55:19 -- scripts/common.sh@367 -- # return 0 00:04:42.336 08:55:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.336 08:55:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.336 --rc genhtml_branch_coverage=1 00:04:42.336 --rc genhtml_function_coverage=1 00:04:42.336 --rc genhtml_legend=1 00:04:42.336 --rc geninfo_all_blocks=1 00:04:42.336 --rc geninfo_unexecuted_blocks=1 00:04:42.336 00:04:42.336 ' 00:04:42.336 08:55:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.336 --rc genhtml_branch_coverage=1 00:04:42.336 --rc genhtml_function_coverage=1 00:04:42.336 --rc genhtml_legend=1 00:04:42.336 --rc geninfo_all_blocks=1 00:04:42.336 --rc geninfo_unexecuted_blocks=1 00:04:42.336 00:04:42.336 ' 00:04:42.336 08:55:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.336 --rc genhtml_branch_coverage=1 00:04:42.336 --rc genhtml_function_coverage=1 00:04:42.336 --rc genhtml_legend=1 00:04:42.336 --rc geninfo_all_blocks=1 00:04:42.336 --rc geninfo_unexecuted_blocks=1 00:04:42.336 00:04:42.336 ' 00:04:42.336 08:55:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.336 --rc genhtml_branch_coverage=1 00:04:42.336 --rc genhtml_function_coverage=1 00:04:42.336 --rc genhtml_legend=1 00:04:42.336 --rc geninfo_all_blocks=1 00:04:42.336 --rc geninfo_unexecuted_blocks=1 00:04:42.336 00:04:42.336 ' 00:04:42.336 08:55:19 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.336 08:55:19 -- nvmf/common.sh@7 -- # uname -s 00:04:42.336 08:55:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.336 08:55:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.336 08:55:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.336 08:55:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.336 08:55:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.336 08:55:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.336 08:55:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.336 08:55:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.336 08:55:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.336 08:55:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.336 08:55:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 
00:04:42.336 08:55:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:04:42.336 08:55:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.336 08:55:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.336 08:55:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.336 08:55:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.336 08:55:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.336 08:55:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.336 08:55:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.336 08:55:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.336 08:55:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.336 08:55:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.336 08:55:19 -- paths/export.sh@5 -- # export PATH 00:04:42.336 08:55:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.336 08:55:19 -- nvmf/common.sh@46 -- # : 0 00:04:42.336 08:55:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:42.336 08:55:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:42.336 08:55:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:42.336 08:55:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.336 08:55:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.336 08:55:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:42.336 08:55:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:42.336 08:55:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:42.336 08:55:19 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:42.336 08:55:19 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:42.336 08:55:19 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:42.336 08:55:19 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.336 08:55:19 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:42.336 08:55:19 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:42.336 08:55:19 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:42.336 08:55:19 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:42.336 08:55:19 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:42.336 08:55:19 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:42.336 08:55:19 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:42.336 08:55:19 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:42.336 08:55:19 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:42.336 08:55:19 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.336 INFO: JSON configuration test init 00:04:42.336 08:55:19 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:42.336 08:55:19 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:42.336 08:55:19 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:42.336 08:55:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.336 08:55:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.336 08:55:19 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:42.336 08:55:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.336 08:55:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.336 08:55:19 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:42.336 08:55:19 -- json_config/json_config.sh@98 -- # local app=target 00:04:42.336 08:55:19 -- json_config/json_config.sh@99 -- # shift 00:04:42.336 08:55:19 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:42.336 08:55:19 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:42.336 08:55:19 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:42.336 08:55:19 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:42.337 08:55:19 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:42.337 08:55:19 -- json_config/json_config.sh@111 -- # app_pid[$app]=54067 00:04:42.337 Waiting for target to run... 00:04:42.337 08:55:19 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:42.337 08:55:19 -- json_config/json_config.sh@114 -- # waitforlisten 54067 /var/tmp/spdk_tgt.sock 00:04:42.337 08:55:19 -- common/autotest_common.sh@829 -- # '[' -z 54067 ']' 00:04:42.337 08:55:19 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:42.337 08:55:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.337 08:55:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.337 08:55:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
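With --wait-for-rpc the target starts its RPC server but defers subsystem initialization until told to proceed, which is why the harness polls the socket (waitforlisten) before issuing any configuration RPCs. A minimal sketch of that startup handshake, assuming the SPDK tree layout used throughout this run (the polling loop is illustrative, not the waitforlisten implementation):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!
  # poll until the RPC server answers on the UNIX socket
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # apply any pre-init configuration here, then let the framework finish starting up
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init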
00:04:42.337 08:55:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.337 08:55:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.337 [2024-11-17 08:55:19.126108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:42.337 [2024-11-17 08:55:19.126216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54067 ] 00:04:42.619 [2024-11-17 08:55:19.432204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.619 [2024-11-17 08:55:19.485780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.619 [2024-11-17 08:55:19.485969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.556 08:55:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.556 08:55:20 -- common/autotest_common.sh@862 -- # return 0 00:04:43.556 00:04:43.556 08:55:20 -- json_config/json_config.sh@115 -- # echo '' 00:04:43.556 08:55:20 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:43.556 08:55:20 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:43.556 08:55:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.556 08:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:43.556 08:55:20 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:43.556 08:55:20 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:43.556 08:55:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.556 08:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:43.556 08:55:20 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:43.556 08:55:20 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:43.556 08:55:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:43.815 08:55:20 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:43.815 08:55:20 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:43.815 08:55:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.815 08:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:43.815 08:55:20 -- json_config/json_config.sh@48 -- # local ret=0 00:04:43.815 08:55:20 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:43.815 08:55:20 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:43.815 08:55:20 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:43.815 08:55:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:43.815 08:55:20 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:44.074 08:55:20 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:44.074 08:55:20 -- json_config/json_config.sh@51 -- # local get_types 00:04:44.074 08:55:20 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:44.074 08:55:20 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:44.074 08:55:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.074 08:55:20 -- 
common/autotest_common.sh@10 -- # set +x 00:04:44.074 08:55:20 -- json_config/json_config.sh@58 -- # return 0 00:04:44.074 08:55:20 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:44.074 08:55:20 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:44.074 08:55:20 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:44.074 08:55:20 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:44.074 08:55:20 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:44.074 08:55:20 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:44.074 08:55:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.074 08:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:44.074 08:55:20 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:44.074 08:55:20 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:44.074 08:55:20 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:44.074 08:55:20 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.074 08:55:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.333 MallocForNvmf0 00:04:44.333 08:55:21 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.333 08:55:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.592 MallocForNvmf1 00:04:44.592 08:55:21 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.592 08:55:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.851 [2024-11-17 08:55:21.669842] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.851 08:55:21 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:44.851 08:55:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.110 08:55:21 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.110 08:55:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.369 08:55:22 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.369 08:55:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.629 08:55:22 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.629 08:55:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.888 [2024-11-17 08:55:22.586647] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:45.888 
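The create_nvmf_subsystem_config step above is a short sequence of RPCs against the target socket; written out directly (arguments copied from the log, socket path as above):

  rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  # backing namespaces: 8 MiB with 512 B blocks and 4 MiB with 1 KiB blocks
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, then one subsystem carrying both namespaces and a single listener
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420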
08:55:22 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:45.888 08:55:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.888 08:55:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.888 08:55:22 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:45.888 08:55:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.888 08:55:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.888 08:55:22 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:45.888 08:55:22 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:45.888 08:55:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:46.147 MallocBdevForConfigChangeCheck 00:04:46.147 08:55:22 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:46.147 08:55:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.147 08:55:22 -- common/autotest_common.sh@10 -- # set +x 00:04:46.147 08:55:22 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:46.147 08:55:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.715 INFO: shutting down applications... 00:04:46.715 08:55:23 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:46.715 08:55:23 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:46.715 08:55:23 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:46.715 08:55:23 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:46.715 08:55:23 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:46.975 Calling clear_iscsi_subsystem 00:04:46.975 Calling clear_nvmf_subsystem 00:04:46.975 Calling clear_nbd_subsystem 00:04:46.975 Calling clear_ublk_subsystem 00:04:46.975 Calling clear_vhost_blk_subsystem 00:04:46.975 Calling clear_vhost_scsi_subsystem 00:04:46.975 Calling clear_scheduler_subsystem 00:04:46.975 Calling clear_bdev_subsystem 00:04:46.975 Calling clear_accel_subsystem 00:04:46.975 Calling clear_vmd_subsystem 00:04:46.975 Calling clear_sock_subsystem 00:04:46.975 Calling clear_iobuf_subsystem 00:04:46.975 08:55:23 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:46.975 08:55:23 -- json_config/json_config.sh@396 -- # count=100 00:04:46.975 08:55:23 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:46.975 08:55:23 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.975 08:55:23 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:46.975 08:55:23 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:47.234 08:55:24 -- json_config/json_config.sh@398 -- # break 00:04:47.234 08:55:24 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:47.234 08:55:24 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:47.234 08:55:24 -- json_config/json_config.sh@120 -- # local app=target 00:04:47.234 08:55:24 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:47.234 08:55:24 -- json_config/json_config.sh@124 -- # [[ -n 54067 ]] 00:04:47.234 08:55:24 -- json_config/json_config.sh@127 -- # kill -SIGINT 54067 00:04:47.234 08:55:24 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:47.234 08:55:24 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:47.234 08:55:24 -- json_config/json_config.sh@130 -- # kill -0 54067 00:04:47.234 08:55:24 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:47.801 08:55:24 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:47.801 08:55:24 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:47.801 08:55:24 -- json_config/json_config.sh@130 -- # kill -0 54067 00:04:47.801 08:55:24 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:47.801 08:55:24 -- json_config/json_config.sh@132 -- # break 00:04:47.801 SPDK target shutdown done 00:04:47.801 08:55:24 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:47.801 08:55:24 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:47.801 INFO: relaunching applications... 00:04:47.801 08:55:24 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:47.801 08:55:24 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.801 08:55:24 -- json_config/json_config.sh@98 -- # local app=target 00:04:47.801 08:55:24 -- json_config/json_config.sh@99 -- # shift 00:04:47.801 08:55:24 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:47.801 08:55:24 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:47.801 08:55:24 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:47.801 08:55:24 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:47.801 08:55:24 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:47.801 Waiting for target to run... 00:04:47.801 08:55:24 -- json_config/json_config.sh@111 -- # app_pid[$app]=54262 00:04:47.801 08:55:24 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:47.801 08:55:24 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.801 08:55:24 -- json_config/json_config.sh@114 -- # waitforlisten 54262 /var/tmp/spdk_tgt.sock 00:04:47.801 08:55:24 -- common/autotest_common.sh@829 -- # '[' -z 54262 ']' 00:04:47.801 08:55:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.801 08:55:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.801 08:55:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.801 08:55:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.801 08:55:24 -- common/autotest_common.sh@10 -- # set +x 00:04:47.801 [2024-11-17 08:55:24.673327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:47.802 [2024-11-17 08:55:24.673441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54262 ] 00:04:48.060 [2024-11-17 08:55:24.986419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.320 [2024-11-17 08:55:25.030109] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:48.320 [2024-11-17 08:55:25.030534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.579 [2024-11-17 08:55:25.326212] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.579 [2024-11-17 08:55:25.358275] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.838 00:04:48.838 INFO: Checking if target configuration is the same... 00:04:48.838 08:55:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.838 08:55:25 -- common/autotest_common.sh@862 -- # return 0 00:04:48.838 08:55:25 -- json_config/json_config.sh@115 -- # echo '' 00:04:48.838 08:55:25 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:48.838 08:55:25 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:48.838 08:55:25 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:48.838 08:55:25 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.838 08:55:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.838 + '[' 2 -ne 2 ']' 00:04:48.838 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:48.838 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:48.838 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:48.838 +++ basename /dev/fd/62 00:04:48.838 ++ mktemp /tmp/62.XXX 00:04:48.838 + tmp_file_1=/tmp/62.hBx 00:04:48.838 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.838 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.838 + tmp_file_2=/tmp/spdk_tgt_config.json.zXn 00:04:48.838 + ret=0 00:04:48.838 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.406 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.406 + diff -u /tmp/62.hBx /tmp/spdk_tgt_config.json.zXn 00:04:49.406 INFO: JSON config files are the same 00:04:49.406 + echo 'INFO: JSON config files are the same' 00:04:49.406 + rm /tmp/62.hBx /tmp/spdk_tgt_config.json.zXn 00:04:49.406 + exit 0 00:04:49.406 INFO: changing configuration and checking if this can be detected... 00:04:49.406 08:55:26 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:49.406 08:55:26 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:04:49.406 08:55:26 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.406 08:55:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.666 08:55:26 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:49.666 08:55:26 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.666 08:55:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.666 + '[' 2 -ne 2 ']' 00:04:49.666 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:49.666 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:49.666 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:49.666 +++ basename /dev/fd/62 00:04:49.666 ++ mktemp /tmp/62.XXX 00:04:49.666 + tmp_file_1=/tmp/62.TvK 00:04:49.666 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.666 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:49.666 + tmp_file_2=/tmp/spdk_tgt_config.json.BqX 00:04:49.666 + ret=0 00:04:49.666 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.925 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.184 + diff -u /tmp/62.TvK /tmp/spdk_tgt_config.json.BqX 00:04:50.184 + ret=1 00:04:50.184 + echo '=== Start of file: /tmp/62.TvK ===' 00:04:50.184 + cat /tmp/62.TvK 00:04:50.184 + echo '=== End of file: /tmp/62.TvK ===' 00:04:50.184 + echo '' 00:04:50.184 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BqX ===' 00:04:50.184 + cat /tmp/spdk_tgt_config.json.BqX 00:04:50.184 + echo '=== End of file: /tmp/spdk_tgt_config.json.BqX ===' 00:04:50.184 + echo '' 00:04:50.184 + rm /tmp/62.TvK /tmp/spdk_tgt_config.json.BqX 00:04:50.184 + exit 1 00:04:50.184 INFO: configuration change detected. 00:04:50.184 08:55:26 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
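Both comparisons above reduce to the same pattern: dump the live configuration with save_config, normalize both sides, and diff. Roughly, using the same helpers the test invokes (temporary file names are illustrative):

  rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  sort_json='./test/json_config/config_filter.py -method sort'
  # 1) the running config should match the file the target was relaunched with
  $rpc save_config | $sort_json > /tmp/live.json
  $sort_json < spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/file.json /tmp/live.json && echo 'INFO: JSON config files are the same'
  # 2) mutate the running config and expect the diff to report a change
  $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
  $rpc save_config | $sort_json > /tmp/live.json
  diff -u /tmp/file.json /tmp/live.json || echo 'INFO: configuration change detected.'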
00:04:50.184 08:55:26 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:50.184 08:55:26 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:50.184 08:55:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.184 08:55:26 -- common/autotest_common.sh@10 -- # set +x 00:04:50.184 08:55:26 -- json_config/json_config.sh@360 -- # local ret=0 00:04:50.184 08:55:26 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:50.184 08:55:26 -- json_config/json_config.sh@370 -- # [[ -n 54262 ]] 00:04:50.184 08:55:26 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:50.184 08:55:26 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:50.184 08:55:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.184 08:55:26 -- common/autotest_common.sh@10 -- # set +x 00:04:50.184 08:55:26 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:50.184 08:55:26 -- json_config/json_config.sh@246 -- # uname -s 00:04:50.184 08:55:26 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:50.184 08:55:26 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:50.184 08:55:26 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:50.184 08:55:26 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:50.184 08:55:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:50.184 08:55:26 -- common/autotest_common.sh@10 -- # set +x 00:04:50.184 08:55:26 -- json_config/json_config.sh@376 -- # killprocess 54262 00:04:50.184 08:55:26 -- common/autotest_common.sh@936 -- # '[' -z 54262 ']' 00:04:50.184 08:55:26 -- common/autotest_common.sh@940 -- # kill -0 54262 00:04:50.184 08:55:26 -- common/autotest_common.sh@941 -- # uname 00:04:50.184 08:55:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:50.184 08:55:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54262 00:04:50.184 killing process with pid 54262 00:04:50.184 08:55:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:50.184 08:55:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:50.184 08:55:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54262' 00:04:50.184 08:55:26 -- common/autotest_common.sh@955 -- # kill 54262 00:04:50.184 08:55:26 -- common/autotest_common.sh@960 -- # wait 54262 00:04:50.443 08:55:27 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.443 08:55:27 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:50.443 08:55:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:50.443 08:55:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.443 INFO: Success 00:04:50.443 08:55:27 -- json_config/json_config.sh@381 -- # return 0 00:04:50.443 08:55:27 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:50.443 00:04:50.444 real 0m8.343s 00:04:50.444 user 0m12.211s 00:04:50.444 sys 0m1.386s 00:04:50.444 ************************************ 00:04:50.444 END TEST json_config 00:04:50.444 ************************************ 00:04:50.444 08:55:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.444 08:55:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.444 08:55:27 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.444 
08:55:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.444 08:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.444 08:55:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.444 ************************************ 00:04:50.444 START TEST json_config_extra_key 00:04:50.444 ************************************ 00:04:50.444 08:55:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.444 08:55:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:50.444 08:55:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:50.444 08:55:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:50.703 08:55:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:50.703 08:55:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:50.703 08:55:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:50.703 08:55:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:50.703 08:55:27 -- scripts/common.sh@335 -- # IFS=.-: 00:04:50.703 08:55:27 -- scripts/common.sh@335 -- # read -ra ver1 00:04:50.703 08:55:27 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.703 08:55:27 -- scripts/common.sh@336 -- # read -ra ver2 00:04:50.703 08:55:27 -- scripts/common.sh@337 -- # local 'op=<' 00:04:50.703 08:55:27 -- scripts/common.sh@339 -- # ver1_l=2 00:04:50.703 08:55:27 -- scripts/common.sh@340 -- # ver2_l=1 00:04:50.703 08:55:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:50.703 08:55:27 -- scripts/common.sh@343 -- # case "$op" in 00:04:50.703 08:55:27 -- scripts/common.sh@344 -- # : 1 00:04:50.703 08:55:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:50.703 08:55:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.703 08:55:27 -- scripts/common.sh@364 -- # decimal 1 00:04:50.703 08:55:27 -- scripts/common.sh@352 -- # local d=1 00:04:50.703 08:55:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.703 08:55:27 -- scripts/common.sh@354 -- # echo 1 00:04:50.703 08:55:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:50.703 08:55:27 -- scripts/common.sh@365 -- # decimal 2 00:04:50.703 08:55:27 -- scripts/common.sh@352 -- # local d=2 00:04:50.703 08:55:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.703 08:55:27 -- scripts/common.sh@354 -- # echo 2 00:04:50.703 08:55:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:50.703 08:55:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:50.704 08:55:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:50.704 08:55:27 -- scripts/common.sh@367 -- # return 0 00:04:50.704 08:55:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.704 08:55:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:50.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.704 --rc genhtml_branch_coverage=1 00:04:50.704 --rc genhtml_function_coverage=1 00:04:50.704 --rc genhtml_legend=1 00:04:50.704 --rc geninfo_all_blocks=1 00:04:50.704 --rc geninfo_unexecuted_blocks=1 00:04:50.704 00:04:50.704 ' 00:04:50.704 08:55:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:50.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.704 --rc genhtml_branch_coverage=1 00:04:50.704 --rc genhtml_function_coverage=1 00:04:50.704 --rc genhtml_legend=1 00:04:50.704 --rc geninfo_all_blocks=1 00:04:50.704 --rc geninfo_unexecuted_blocks=1 00:04:50.704 00:04:50.704 ' 
00:04:50.704 08:55:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:50.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.704 --rc genhtml_branch_coverage=1 00:04:50.704 --rc genhtml_function_coverage=1 00:04:50.704 --rc genhtml_legend=1 00:04:50.704 --rc geninfo_all_blocks=1 00:04:50.704 --rc geninfo_unexecuted_blocks=1 00:04:50.704 00:04:50.704 ' 00:04:50.704 08:55:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:50.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.704 --rc genhtml_branch_coverage=1 00:04:50.704 --rc genhtml_function_coverage=1 00:04:50.704 --rc genhtml_legend=1 00:04:50.704 --rc geninfo_all_blocks=1 00:04:50.704 --rc geninfo_unexecuted_blocks=1 00:04:50.704 00:04:50.704 ' 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.704 08:55:27 -- nvmf/common.sh@7 -- # uname -s 00:04:50.704 08:55:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.704 08:55:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.704 08:55:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.704 08:55:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.704 08:55:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.704 08:55:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.704 08:55:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.704 08:55:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.704 08:55:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.704 08:55:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.704 08:55:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:04:50.704 08:55:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:04:50.704 08:55:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.704 08:55:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.704 08:55:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.704 08:55:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.704 08:55:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.704 08:55:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.704 08:55:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.704 08:55:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.704 08:55:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.704 08:55:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.704 08:55:27 -- paths/export.sh@5 -- # export PATH 00:04:50.704 08:55:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.704 08:55:27 -- nvmf/common.sh@46 -- # : 0 00:04:50.704 08:55:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:50.704 08:55:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:50.704 08:55:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:50.704 08:55:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.704 08:55:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.704 08:55:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:50.704 08:55:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:50.704 08:55:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:50.704 INFO: launching applications... 00:04:50.704 Waiting for target to run... 00:04:50.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54405 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54405 /var/tmp/spdk_tgt.sock 00:04:50.704 08:55:27 -- common/autotest_common.sh@829 -- # '[' -z 54405 ']' 00:04:50.704 08:55:27 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.704 08:55:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.704 08:55:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.704 08:55:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.704 08:55:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.704 08:55:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.704 [2024-11-17 08:55:27.579881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:50.704 [2024-11-17 08:55:27.581073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54405 ] 00:04:51.274 [2024-11-17 08:55:27.892981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.274 [2024-11-17 08:55:27.935532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:51.274 [2024-11-17 08:55:27.935995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.842 08:55:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.842 08:55:28 -- common/autotest_common.sh@862 -- # return 0 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:51.842 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:51.842 INFO: shutting down applications... 
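The json_config_extra_key launch traced above is spdk_tgt started in the background with an explicit JSON config and a private RPC socket, followed by a wait until that socket answers. waitforlisten is a helper from autotest_common.sh; the polling loop below is a simplified stand-in for it (an assumption, not its real implementation), while the spdk_tgt flags are copied from the traced command line:

  # start the target: core mask 0x1, 1024 MiB of memory, private RPC socket, extra-key JSON config
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  tgt_pid=$!
  # crude stand-in for waitforlisten: poll the RPC socket until it responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5
  done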
00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54405 ]] 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54405 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54405 00:04:51.842 08:55:28 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:52.419 08:55:29 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:52.420 08:55:29 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:52.420 SPDK target shutdown done 00:04:52.420 Success 00:04:52.420 08:55:29 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54405 00:04:52.420 08:55:29 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:52.420 08:55:29 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:52.420 08:55:29 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:52.420 08:55:29 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:52.420 08:55:29 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:52.420 00:04:52.420 real 0m1.811s 00:04:52.420 user 0m1.699s 00:04:52.420 sys 0m0.313s 00:04:52.420 ************************************ 00:04:52.420 END TEST json_config_extra_key 00:04:52.420 ************************************ 00:04:52.420 08:55:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.420 08:55:29 -- common/autotest_common.sh@10 -- # set +x 00:04:52.420 08:55:29 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.420 08:55:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.420 08:55:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.420 08:55:29 -- common/autotest_common.sh@10 -- # set +x 00:04:52.421 ************************************ 00:04:52.421 START TEST alias_rpc 00:04:52.421 ************************************ 00:04:52.421 08:55:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.421 * Looking for test storage... 
00:04:52.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:52.421 08:55:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.421 08:55:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.421 08:55:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.421 08:55:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.421 08:55:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.421 08:55:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.421 08:55:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.421 08:55:29 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.421 08:55:29 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.421 08:55:29 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.421 08:55:29 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.421 08:55:29 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.421 08:55:29 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.421 08:55:29 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.421 08:55:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.421 08:55:29 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.421 08:55:29 -- scripts/common.sh@344 -- # : 1 00:04:52.421 08:55:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.421 08:55:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.421 08:55:29 -- scripts/common.sh@364 -- # decimal 1 00:04:52.421 08:55:29 -- scripts/common.sh@352 -- # local d=1 00:04:52.421 08:55:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.421 08:55:29 -- scripts/common.sh@354 -- # echo 1 00:04:52.421 08:55:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.421 08:55:29 -- scripts/common.sh@365 -- # decimal 2 00:04:52.421 08:55:29 -- scripts/common.sh@352 -- # local d=2 00:04:52.421 08:55:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.422 08:55:29 -- scripts/common.sh@354 -- # echo 2 00:04:52.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.422 08:55:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.422 08:55:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.422 08:55:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.422 08:55:29 -- scripts/common.sh@367 -- # return 0 00:04:52.422 08:55:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.422 08:55:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.422 --rc genhtml_branch_coverage=1 00:04:52.422 --rc genhtml_function_coverage=1 00:04:52.422 --rc genhtml_legend=1 00:04:52.422 --rc geninfo_all_blocks=1 00:04:52.422 --rc geninfo_unexecuted_blocks=1 00:04:52.422 00:04:52.422 ' 00:04:52.422 08:55:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.422 --rc genhtml_branch_coverage=1 00:04:52.422 --rc genhtml_function_coverage=1 00:04:52.422 --rc genhtml_legend=1 00:04:52.422 --rc geninfo_all_blocks=1 00:04:52.422 --rc geninfo_unexecuted_blocks=1 00:04:52.422 00:04:52.422 ' 00:04:52.422 08:55:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.422 --rc genhtml_branch_coverage=1 00:04:52.422 --rc genhtml_function_coverage=1 00:04:52.422 --rc genhtml_legend=1 00:04:52.422 --rc geninfo_all_blocks=1 00:04:52.422 --rc geninfo_unexecuted_blocks=1 00:04:52.422 00:04:52.422 ' 00:04:52.422 08:55:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.422 --rc genhtml_branch_coverage=1 00:04:52.422 --rc genhtml_function_coverage=1 00:04:52.422 --rc genhtml_legend=1 00:04:52.422 --rc geninfo_all_blocks=1 00:04:52.422 --rc geninfo_unexecuted_blocks=1 00:04:52.422 00:04:52.422 ' 00:04:52.423 08:55:29 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.423 08:55:29 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54482 00:04:52.423 08:55:29 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54482 00:04:52.423 08:55:29 -- common/autotest_common.sh@829 -- # '[' -z 54482 ']' 00:04:52.423 08:55:29 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.423 08:55:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.423 08:55:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.423 08:55:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.423 08:55:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.423 08:55:29 -- common/autotest_common.sh@10 -- # set +x 00:04:52.688 [2024-11-17 08:55:29.360297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:52.688 [2024-11-17 08:55:29.360542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54482 ] 00:04:52.688 [2024-11-17 08:55:29.492121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.688 [2024-11-17 08:55:29.547676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.688 [2024-11-17 08:55:29.548149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.624 08:55:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.624 08:55:30 -- common/autotest_common.sh@862 -- # return 0 00:04:53.624 08:55:30 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:53.883 08:55:30 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54482 00:04:53.883 08:55:30 -- common/autotest_common.sh@936 -- # '[' -z 54482 ']' 00:04:53.883 08:55:30 -- common/autotest_common.sh@940 -- # kill -0 54482 00:04:53.883 08:55:30 -- common/autotest_common.sh@941 -- # uname 00:04:53.883 08:55:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.883 08:55:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54482 00:04:53.883 08:55:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:53.883 killing process with pid 54482 00:04:53.883 08:55:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:53.883 08:55:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54482' 00:04:53.883 08:55:30 -- common/autotest_common.sh@955 -- # kill 54482 00:04:53.883 08:55:30 -- common/autotest_common.sh@960 -- # wait 54482 00:04:54.142 ************************************ 00:04:54.142 END TEST alias_rpc 00:04:54.142 ************************************ 00:04:54.142 00:04:54.142 real 0m1.897s 00:04:54.142 user 0m2.348s 00:04:54.142 sys 0m0.351s 00:04:54.142 08:55:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.142 08:55:31 -- common/autotest_common.sh@10 -- # set +x 00:04:54.401 08:55:31 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:54.401 08:55:31 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.401 08:55:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.401 08:55:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.401 08:55:31 -- common/autotest_common.sh@10 -- # set +x 00:04:54.401 ************************************ 00:04:54.401 START TEST spdkcli_tcp 00:04:54.401 ************************************ 00:04:54.401 08:55:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.401 * Looking for test storage... 
00:04:54.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.401 08:55:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:54.401 08:55:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:54.401 08:55:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:54.401 08:55:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:54.401 08:55:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:54.401 08:55:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:54.401 08:55:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:54.401 08:55:31 -- scripts/common.sh@335 -- # IFS=.-: 00:04:54.401 08:55:31 -- scripts/common.sh@335 -- # read -ra ver1 00:04:54.401 08:55:31 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.401 08:55:31 -- scripts/common.sh@336 -- # read -ra ver2 00:04:54.401 08:55:31 -- scripts/common.sh@337 -- # local 'op=<' 00:04:54.401 08:55:31 -- scripts/common.sh@339 -- # ver1_l=2 00:04:54.401 08:55:31 -- scripts/common.sh@340 -- # ver2_l=1 00:04:54.401 08:55:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:54.401 08:55:31 -- scripts/common.sh@343 -- # case "$op" in 00:04:54.401 08:55:31 -- scripts/common.sh@344 -- # : 1 00:04:54.401 08:55:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:54.401 08:55:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.401 08:55:31 -- scripts/common.sh@364 -- # decimal 1 00:04:54.401 08:55:31 -- scripts/common.sh@352 -- # local d=1 00:04:54.401 08:55:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.401 08:55:31 -- scripts/common.sh@354 -- # echo 1 00:04:54.401 08:55:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:54.401 08:55:31 -- scripts/common.sh@365 -- # decimal 2 00:04:54.401 08:55:31 -- scripts/common.sh@352 -- # local d=2 00:04:54.401 08:55:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.401 08:55:31 -- scripts/common.sh@354 -- # echo 2 00:04:54.401 08:55:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:54.401 08:55:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:54.401 08:55:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:54.401 08:55:31 -- scripts/common.sh@367 -- # return 0 00:04:54.401 08:55:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.401 08:55:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:54.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.401 --rc genhtml_branch_coverage=1 00:04:54.401 --rc genhtml_function_coverage=1 00:04:54.401 --rc genhtml_legend=1 00:04:54.401 --rc geninfo_all_blocks=1 00:04:54.401 --rc geninfo_unexecuted_blocks=1 00:04:54.401 00:04:54.401 ' 00:04:54.401 08:55:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:54.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.401 --rc genhtml_branch_coverage=1 00:04:54.401 --rc genhtml_function_coverage=1 00:04:54.401 --rc genhtml_legend=1 00:04:54.401 --rc geninfo_all_blocks=1 00:04:54.401 --rc geninfo_unexecuted_blocks=1 00:04:54.401 00:04:54.401 ' 00:04:54.401 08:55:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:54.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.401 --rc genhtml_branch_coverage=1 00:04:54.401 --rc genhtml_function_coverage=1 00:04:54.401 --rc genhtml_legend=1 00:04:54.401 --rc geninfo_all_blocks=1 00:04:54.401 --rc geninfo_unexecuted_blocks=1 00:04:54.401 00:04:54.401 ' 00:04:54.401 08:55:31 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:54.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.401 --rc genhtml_branch_coverage=1 00:04:54.401 --rc genhtml_function_coverage=1 00:04:54.401 --rc genhtml_legend=1 00:04:54.401 --rc geninfo_all_blocks=1 00:04:54.401 --rc geninfo_unexecuted_blocks=1 00:04:54.401 00:04:54.401 ' 00:04:54.401 08:55:31 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.401 08:55:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.401 08:55:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.401 08:55:31 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.401 08:55:31 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.401 08:55:31 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.401 08:55:31 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.401 08:55:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.401 08:55:31 -- common/autotest_common.sh@10 -- # set +x 00:04:54.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.401 08:55:31 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54565 00:04:54.401 08:55:31 -- spdkcli/tcp.sh@27 -- # waitforlisten 54565 00:04:54.401 08:55:31 -- common/autotest_common.sh@829 -- # '[' -z 54565 ']' 00:04:54.401 08:55:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.401 08:55:31 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.401 08:55:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.401 08:55:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.401 08:55:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.401 08:55:31 -- common/autotest_common.sh@10 -- # set +x 00:04:54.401 [2024-11-17 08:55:31.316945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:54.401 [2024-11-17 08:55:31.317047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54565 ] 00:04:54.660 [2024-11-17 08:55:31.455050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.660 [2024-11-17 08:55:31.525753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.660 [2024-11-17 08:55:31.526095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.660 [2024-11-17 08:55:31.526269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.594 08:55:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.594 08:55:32 -- common/autotest_common.sh@862 -- # return 0 00:04:55.594 08:55:32 -- spdkcli/tcp.sh@31 -- # socat_pid=54582 00:04:55.594 08:55:32 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:55.594 08:55:32 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:55.594 [ 00:04:55.594 "bdev_malloc_delete", 00:04:55.594 "bdev_malloc_create", 00:04:55.594 "bdev_null_resize", 00:04:55.594 "bdev_null_delete", 00:04:55.594 "bdev_null_create", 00:04:55.594 "bdev_nvme_cuse_unregister", 00:04:55.594 "bdev_nvme_cuse_register", 00:04:55.594 "bdev_opal_new_user", 00:04:55.594 "bdev_opal_set_lock_state", 00:04:55.594 "bdev_opal_delete", 00:04:55.594 "bdev_opal_get_info", 00:04:55.594 "bdev_opal_create", 00:04:55.594 "bdev_nvme_opal_revert", 00:04:55.594 "bdev_nvme_opal_init", 00:04:55.594 "bdev_nvme_send_cmd", 00:04:55.594 "bdev_nvme_get_path_iostat", 00:04:55.594 "bdev_nvme_get_mdns_discovery_info", 00:04:55.594 "bdev_nvme_stop_mdns_discovery", 00:04:55.594 "bdev_nvme_start_mdns_discovery", 00:04:55.594 "bdev_nvme_set_multipath_policy", 00:04:55.594 "bdev_nvme_set_preferred_path", 00:04:55.594 "bdev_nvme_get_io_paths", 00:04:55.594 "bdev_nvme_remove_error_injection", 00:04:55.594 "bdev_nvme_add_error_injection", 00:04:55.594 "bdev_nvme_get_discovery_info", 00:04:55.594 "bdev_nvme_stop_discovery", 00:04:55.594 "bdev_nvme_start_discovery", 00:04:55.594 "bdev_nvme_get_controller_health_info", 00:04:55.594 "bdev_nvme_disable_controller", 00:04:55.594 "bdev_nvme_enable_controller", 00:04:55.594 "bdev_nvme_reset_controller", 00:04:55.594 "bdev_nvme_get_transport_statistics", 00:04:55.594 "bdev_nvme_apply_firmware", 00:04:55.594 "bdev_nvme_detach_controller", 00:04:55.594 "bdev_nvme_get_controllers", 00:04:55.594 "bdev_nvme_attach_controller", 00:04:55.594 "bdev_nvme_set_hotplug", 00:04:55.594 "bdev_nvme_set_options", 00:04:55.594 "bdev_passthru_delete", 00:04:55.594 "bdev_passthru_create", 00:04:55.594 "bdev_lvol_grow_lvstore", 00:04:55.594 "bdev_lvol_get_lvols", 00:04:55.594 "bdev_lvol_get_lvstores", 00:04:55.594 "bdev_lvol_delete", 00:04:55.594 "bdev_lvol_set_read_only", 00:04:55.594 "bdev_lvol_resize", 00:04:55.594 "bdev_lvol_decouple_parent", 00:04:55.594 "bdev_lvol_inflate", 00:04:55.594 "bdev_lvol_rename", 00:04:55.594 "bdev_lvol_clone_bdev", 00:04:55.594 "bdev_lvol_clone", 00:04:55.594 "bdev_lvol_snapshot", 00:04:55.594 "bdev_lvol_create", 00:04:55.594 "bdev_lvol_delete_lvstore", 00:04:55.594 "bdev_lvol_rename_lvstore", 00:04:55.594 "bdev_lvol_create_lvstore", 00:04:55.594 "bdev_raid_set_options", 00:04:55.594 "bdev_raid_remove_base_bdev", 00:04:55.594 "bdev_raid_add_base_bdev", 
00:04:55.594 "bdev_raid_delete", 00:04:55.594 "bdev_raid_create", 00:04:55.594 "bdev_raid_get_bdevs", 00:04:55.594 "bdev_error_inject_error", 00:04:55.594 "bdev_error_delete", 00:04:55.594 "bdev_error_create", 00:04:55.594 "bdev_split_delete", 00:04:55.594 "bdev_split_create", 00:04:55.594 "bdev_delay_delete", 00:04:55.594 "bdev_delay_create", 00:04:55.594 "bdev_delay_update_latency", 00:04:55.594 "bdev_zone_block_delete", 00:04:55.594 "bdev_zone_block_create", 00:04:55.594 "blobfs_create", 00:04:55.594 "blobfs_detect", 00:04:55.594 "blobfs_set_cache_size", 00:04:55.594 "bdev_aio_delete", 00:04:55.594 "bdev_aio_rescan", 00:04:55.594 "bdev_aio_create", 00:04:55.594 "bdev_ftl_set_property", 00:04:55.594 "bdev_ftl_get_properties", 00:04:55.594 "bdev_ftl_get_stats", 00:04:55.594 "bdev_ftl_unmap", 00:04:55.594 "bdev_ftl_unload", 00:04:55.594 "bdev_ftl_delete", 00:04:55.594 "bdev_ftl_load", 00:04:55.594 "bdev_ftl_create", 00:04:55.594 "bdev_virtio_attach_controller", 00:04:55.594 "bdev_virtio_scsi_get_devices", 00:04:55.594 "bdev_virtio_detach_controller", 00:04:55.594 "bdev_virtio_blk_set_hotplug", 00:04:55.594 "bdev_iscsi_delete", 00:04:55.594 "bdev_iscsi_create", 00:04:55.594 "bdev_iscsi_set_options", 00:04:55.594 "bdev_uring_delete", 00:04:55.594 "bdev_uring_create", 00:04:55.594 "accel_error_inject_error", 00:04:55.594 "ioat_scan_accel_module", 00:04:55.594 "dsa_scan_accel_module", 00:04:55.594 "iaa_scan_accel_module", 00:04:55.594 "vfu_virtio_create_scsi_endpoint", 00:04:55.594 "vfu_virtio_scsi_remove_target", 00:04:55.594 "vfu_virtio_scsi_add_target", 00:04:55.594 "vfu_virtio_create_blk_endpoint", 00:04:55.594 "vfu_virtio_delete_endpoint", 00:04:55.594 "iscsi_set_options", 00:04:55.594 "iscsi_get_auth_groups", 00:04:55.594 "iscsi_auth_group_remove_secret", 00:04:55.594 "iscsi_auth_group_add_secret", 00:04:55.594 "iscsi_delete_auth_group", 00:04:55.594 "iscsi_create_auth_group", 00:04:55.594 "iscsi_set_discovery_auth", 00:04:55.594 "iscsi_get_options", 00:04:55.594 "iscsi_target_node_request_logout", 00:04:55.594 "iscsi_target_node_set_redirect", 00:04:55.594 "iscsi_target_node_set_auth", 00:04:55.594 "iscsi_target_node_add_lun", 00:04:55.594 "iscsi_get_connections", 00:04:55.594 "iscsi_portal_group_set_auth", 00:04:55.594 "iscsi_start_portal_group", 00:04:55.594 "iscsi_delete_portal_group", 00:04:55.594 "iscsi_create_portal_group", 00:04:55.594 "iscsi_get_portal_groups", 00:04:55.594 "iscsi_delete_target_node", 00:04:55.594 "iscsi_target_node_remove_pg_ig_maps", 00:04:55.594 "iscsi_target_node_add_pg_ig_maps", 00:04:55.594 "iscsi_create_target_node", 00:04:55.594 "iscsi_get_target_nodes", 00:04:55.594 "iscsi_delete_initiator_group", 00:04:55.594 "iscsi_initiator_group_remove_initiators", 00:04:55.594 "iscsi_initiator_group_add_initiators", 00:04:55.594 "iscsi_create_initiator_group", 00:04:55.594 "iscsi_get_initiator_groups", 00:04:55.594 "nvmf_set_crdt", 00:04:55.594 "nvmf_set_config", 00:04:55.594 "nvmf_set_max_subsystems", 00:04:55.594 "nvmf_subsystem_get_listeners", 00:04:55.594 "nvmf_subsystem_get_qpairs", 00:04:55.594 "nvmf_subsystem_get_controllers", 00:04:55.594 "nvmf_get_stats", 00:04:55.594 "nvmf_get_transports", 00:04:55.594 "nvmf_create_transport", 00:04:55.594 "nvmf_get_targets", 00:04:55.594 "nvmf_delete_target", 00:04:55.594 "nvmf_create_target", 00:04:55.594 "nvmf_subsystem_allow_any_host", 00:04:55.594 "nvmf_subsystem_remove_host", 00:04:55.594 "nvmf_subsystem_add_host", 00:04:55.594 "nvmf_subsystem_remove_ns", 00:04:55.594 "nvmf_subsystem_add_ns", 00:04:55.594 
"nvmf_subsystem_listener_set_ana_state", 00:04:55.594 "nvmf_discovery_get_referrals", 00:04:55.594 "nvmf_discovery_remove_referral", 00:04:55.594 "nvmf_discovery_add_referral", 00:04:55.594 "nvmf_subsystem_remove_listener", 00:04:55.594 "nvmf_subsystem_add_listener", 00:04:55.594 "nvmf_delete_subsystem", 00:04:55.594 "nvmf_create_subsystem", 00:04:55.594 "nvmf_get_subsystems", 00:04:55.594 "env_dpdk_get_mem_stats", 00:04:55.594 "nbd_get_disks", 00:04:55.594 "nbd_stop_disk", 00:04:55.594 "nbd_start_disk", 00:04:55.594 "ublk_recover_disk", 00:04:55.594 "ublk_get_disks", 00:04:55.594 "ublk_stop_disk", 00:04:55.595 "ublk_start_disk", 00:04:55.595 "ublk_destroy_target", 00:04:55.595 "ublk_create_target", 00:04:55.595 "virtio_blk_create_transport", 00:04:55.595 "virtio_blk_get_transports", 00:04:55.595 "vhost_controller_set_coalescing", 00:04:55.595 "vhost_get_controllers", 00:04:55.595 "vhost_delete_controller", 00:04:55.595 "vhost_create_blk_controller", 00:04:55.595 "vhost_scsi_controller_remove_target", 00:04:55.595 "vhost_scsi_controller_add_target", 00:04:55.595 "vhost_start_scsi_controller", 00:04:55.595 "vhost_create_scsi_controller", 00:04:55.595 "thread_set_cpumask", 00:04:55.595 "framework_get_scheduler", 00:04:55.595 "framework_set_scheduler", 00:04:55.595 "framework_get_reactors", 00:04:55.595 "thread_get_io_channels", 00:04:55.595 "thread_get_pollers", 00:04:55.595 "thread_get_stats", 00:04:55.595 "framework_monitor_context_switch", 00:04:55.595 "spdk_kill_instance", 00:04:55.595 "log_enable_timestamps", 00:04:55.595 "log_get_flags", 00:04:55.595 "log_clear_flag", 00:04:55.595 "log_set_flag", 00:04:55.595 "log_get_level", 00:04:55.595 "log_set_level", 00:04:55.595 "log_get_print_level", 00:04:55.595 "log_set_print_level", 00:04:55.595 "framework_enable_cpumask_locks", 00:04:55.595 "framework_disable_cpumask_locks", 00:04:55.595 "framework_wait_init", 00:04:55.595 "framework_start_init", 00:04:55.595 "scsi_get_devices", 00:04:55.595 "bdev_get_histogram", 00:04:55.595 "bdev_enable_histogram", 00:04:55.595 "bdev_set_qos_limit", 00:04:55.595 "bdev_set_qd_sampling_period", 00:04:55.595 "bdev_get_bdevs", 00:04:55.595 "bdev_reset_iostat", 00:04:55.595 "bdev_get_iostat", 00:04:55.595 "bdev_examine", 00:04:55.595 "bdev_wait_for_examine", 00:04:55.595 "bdev_set_options", 00:04:55.595 "notify_get_notifications", 00:04:55.595 "notify_get_types", 00:04:55.595 "accel_get_stats", 00:04:55.595 "accel_set_options", 00:04:55.595 "accel_set_driver", 00:04:55.595 "accel_crypto_key_destroy", 00:04:55.595 "accel_crypto_keys_get", 00:04:55.595 "accel_crypto_key_create", 00:04:55.595 "accel_assign_opc", 00:04:55.595 "accel_get_module_info", 00:04:55.595 "accel_get_opc_assignments", 00:04:55.595 "vmd_rescan", 00:04:55.595 "vmd_remove_device", 00:04:55.595 "vmd_enable", 00:04:55.595 "sock_set_default_impl", 00:04:55.595 "sock_impl_set_options", 00:04:55.595 "sock_impl_get_options", 00:04:55.595 "iobuf_get_stats", 00:04:55.595 "iobuf_set_options", 00:04:55.595 "framework_get_pci_devices", 00:04:55.595 "framework_get_config", 00:04:55.595 "framework_get_subsystems", 00:04:55.595 "vfu_tgt_set_base_path", 00:04:55.595 "trace_get_info", 00:04:55.595 "trace_get_tpoint_group_mask", 00:04:55.595 "trace_disable_tpoint_group", 00:04:55.595 "trace_enable_tpoint_group", 00:04:55.595 "trace_clear_tpoint_mask", 00:04:55.595 "trace_set_tpoint_mask", 00:04:55.595 "spdk_get_version", 00:04:55.595 "rpc_get_methods" 00:04:55.595 ] 00:04:55.853 08:55:32 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:55.853 
08:55:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.853 08:55:32 -- common/autotest_common.sh@10 -- # set +x 00:04:55.853 08:55:32 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.853 08:55:32 -- spdkcli/tcp.sh@38 -- # killprocess 54565 00:04:55.853 08:55:32 -- common/autotest_common.sh@936 -- # '[' -z 54565 ']' 00:04:55.853 08:55:32 -- common/autotest_common.sh@940 -- # kill -0 54565 00:04:55.853 08:55:32 -- common/autotest_common.sh@941 -- # uname 00:04:55.853 08:55:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:55.853 08:55:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54565 00:04:55.853 killing process with pid 54565 00:04:55.853 08:55:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:55.853 08:55:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:55.853 08:55:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54565' 00:04:55.853 08:55:32 -- common/autotest_common.sh@955 -- # kill 54565 00:04:55.853 08:55:32 -- common/autotest_common.sh@960 -- # wait 54565 00:04:56.110 ************************************ 00:04:56.110 END TEST spdkcli_tcp 00:04:56.110 ************************************ 00:04:56.110 00:04:56.110 real 0m1.799s 00:04:56.110 user 0m3.396s 00:04:56.110 sys 0m0.375s 00:04:56.110 08:55:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.110 08:55:32 -- common/autotest_common.sh@10 -- # set +x 00:04:56.110 08:55:32 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.110 08:55:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.110 08:55:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.110 08:55:32 -- common/autotest_common.sh@10 -- # set +x 00:04:56.110 ************************************ 00:04:56.110 START TEST dpdk_mem_utility 00:04:56.110 ************************************ 00:04:56.110 08:55:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.110 * Looking for test storage... 
00:04:56.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:56.110 08:55:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:56.110 08:55:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:56.111 08:55:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:56.369 08:55:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:56.369 08:55:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:56.369 08:55:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:56.369 08:55:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:56.369 08:55:33 -- scripts/common.sh@335 -- # IFS=.-: 00:04:56.369 08:55:33 -- scripts/common.sh@335 -- # read -ra ver1 00:04:56.369 08:55:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.369 08:55:33 -- scripts/common.sh@336 -- # read -ra ver2 00:04:56.369 08:55:33 -- scripts/common.sh@337 -- # local 'op=<' 00:04:56.369 08:55:33 -- scripts/common.sh@339 -- # ver1_l=2 00:04:56.369 08:55:33 -- scripts/common.sh@340 -- # ver2_l=1 00:04:56.369 08:55:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:56.369 08:55:33 -- scripts/common.sh@343 -- # case "$op" in 00:04:56.369 08:55:33 -- scripts/common.sh@344 -- # : 1 00:04:56.369 08:55:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:56.369 08:55:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.369 08:55:33 -- scripts/common.sh@364 -- # decimal 1 00:04:56.369 08:55:33 -- scripts/common.sh@352 -- # local d=1 00:04:56.369 08:55:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.369 08:55:33 -- scripts/common.sh@354 -- # echo 1 00:04:56.369 08:55:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:56.369 08:55:33 -- scripts/common.sh@365 -- # decimal 2 00:04:56.369 08:55:33 -- scripts/common.sh@352 -- # local d=2 00:04:56.369 08:55:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.369 08:55:33 -- scripts/common.sh@354 -- # echo 2 00:04:56.369 08:55:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:56.369 08:55:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:56.369 08:55:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:56.369 08:55:33 -- scripts/common.sh@367 -- # return 0 00:04:56.369 08:55:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.369 08:55:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:56.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.369 --rc genhtml_branch_coverage=1 00:04:56.369 --rc genhtml_function_coverage=1 00:04:56.369 --rc genhtml_legend=1 00:04:56.369 --rc geninfo_all_blocks=1 00:04:56.369 --rc geninfo_unexecuted_blocks=1 00:04:56.369 00:04:56.369 ' 00:04:56.369 08:55:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:56.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.369 --rc genhtml_branch_coverage=1 00:04:56.369 --rc genhtml_function_coverage=1 00:04:56.369 --rc genhtml_legend=1 00:04:56.369 --rc geninfo_all_blocks=1 00:04:56.369 --rc geninfo_unexecuted_blocks=1 00:04:56.369 00:04:56.369 ' 00:04:56.369 08:55:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:56.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.369 --rc genhtml_branch_coverage=1 00:04:56.369 --rc genhtml_function_coverage=1 00:04:56.369 --rc genhtml_legend=1 00:04:56.369 --rc geninfo_all_blocks=1 00:04:56.369 --rc geninfo_unexecuted_blocks=1 00:04:56.369 00:04:56.369 ' 
00:04:56.369 08:55:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:56.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.369 --rc genhtml_branch_coverage=1 00:04:56.369 --rc genhtml_function_coverage=1 00:04:56.369 --rc genhtml_legend=1 00:04:56.369 --rc geninfo_all_blocks=1 00:04:56.369 --rc geninfo_unexecuted_blocks=1 00:04:56.369 00:04:56.369 ' 00:04:56.369 08:55:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.369 08:55:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54663 00:04:56.369 08:55:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.369 08:55:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54663 00:04:56.369 08:55:33 -- common/autotest_common.sh@829 -- # '[' -z 54663 ']' 00:04:56.369 08:55:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.369 08:55:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.369 08:55:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.369 08:55:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.369 08:55:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.369 [2024-11-17 08:55:33.164430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:56.369 [2024-11-17 08:55:33.164534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54663 ] 00:04:56.628 [2024-11-17 08:55:33.299245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.628 [2024-11-17 08:55:33.352964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.628 [2024-11-17 08:55:33.353110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.194 08:55:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.194 08:55:34 -- common/autotest_common.sh@862 -- # return 0 00:04:57.194 08:55:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.194 08:55:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.194 08:55:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.194 08:55:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.455 { 00:04:57.455 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.455 } 00:04:57.455 08:55:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.455 08:55:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.455 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:57.455 1 heaps totaling size 814.000000 MiB 00:04:57.455 size: 814.000000 MiB heap id: 0 00:04:57.455 end heaps---------- 00:04:57.455 8 mempools totaling size 598.116089 MiB 00:04:57.455 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:57.455 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:57.455 size: 84.521057 MiB name: bdev_io_54663 00:04:57.455 size: 51.011292 MiB name: evtpool_54663 00:04:57.455 size: 50.003479 MiB name: msgpool_54663 
00:04:57.455 size: 21.763794 MiB name: PDU_Pool 00:04:57.455 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:57.455 size: 0.026123 MiB name: Session_Pool 00:04:57.455 end mempools------- 00:04:57.455 6 memzones totaling size 4.142822 MiB 00:04:57.455 size: 1.000366 MiB name: RG_ring_0_54663 00:04:57.455 size: 1.000366 MiB name: RG_ring_1_54663 00:04:57.455 size: 1.000366 MiB name: RG_ring_4_54663 00:04:57.455 size: 1.000366 MiB name: RG_ring_5_54663 00:04:57.455 size: 0.125366 MiB name: RG_ring_2_54663 00:04:57.455 size: 0.015991 MiB name: RG_ring_3_54663 00:04:57.455 end memzones------- 00:04:57.455 08:55:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:57.455 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:04:57.455 list of free elements. size: 12.471375 MiB 00:04:57.455 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:57.455 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:57.455 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:57.455 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:57.455 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:57.455 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:57.455 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:57.455 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:57.455 element at address: 0x200000200000 with size: 0.832825 MiB 00:04:57.455 element at address: 0x20001aa00000 with size: 0.569153 MiB 00:04:57.455 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:57.455 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:57.455 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:57.455 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:57.455 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:57.455 list of standard malloc elements. 
size: 199.266052 MiB 00:04:57.455 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:57.455 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:57.455 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:57.455 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:57.455 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:57.455 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:57.455 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:57.455 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:57.455 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:57.455 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:04:57.455 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:57.455 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:57.456 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa91e40 
with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94300 with size: 0.000183 MiB 
00:04:57.456 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:57.456 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:57.457 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:57.457 element at 
address: 0x200027e6d500 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6f9c0 
with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:57.457 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:57.457 list of memzone associated elements. size: 602.262573 MiB 00:04:57.457 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:57.457 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:57.457 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:57.457 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:57.457 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:57.457 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54663_0 00:04:57.457 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:57.457 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54663_0 00:04:57.457 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:57.457 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54663_0 00:04:57.457 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:57.457 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:57.457 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:57.457 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:57.457 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:57.457 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54663 00:04:57.457 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:57.457 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54663 00:04:57.457 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:57.457 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54663 00:04:57.457 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:57.457 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:57.457 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:57.457 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:57.457 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:57.457 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:57.457 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:57.457 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:57.457 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:57.457 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54663 00:04:57.457 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:57.457 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54663 00:04:57.457 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:57.457 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54663 00:04:57.457 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:57.457 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54663 00:04:57.457 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:57.457 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54663 
00:04:57.457 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:57.457 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:57.457 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:57.457 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:57.457 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:57.457 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:57.457 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:57.457 associated memzone info: size: 0.125366 MiB name: RG_ring_2_54663 00:04:57.457 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:57.457 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:57.457 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:57.457 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:57.457 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:57.457 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54663 00:04:57.457 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:57.457 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:57.457 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:57.458 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54663 00:04:57.458 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:57.458 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54663 00:04:57.458 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:57.458 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:57.458 08:55:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:57.458 08:55:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54663 00:04:57.458 08:55:34 -- common/autotest_common.sh@936 -- # '[' -z 54663 ']' 00:04:57.458 08:55:34 -- common/autotest_common.sh@940 -- # kill -0 54663 00:04:57.458 08:55:34 -- common/autotest_common.sh@941 -- # uname 00:04:57.458 08:55:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.458 08:55:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54663 00:04:57.458 08:55:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.458 08:55:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.458 killing process with pid 54663 00:04:57.458 08:55:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54663' 00:04:57.458 08:55:34 -- common/autotest_common.sh@955 -- # kill 54663 00:04:57.458 08:55:34 -- common/autotest_common.sh@960 -- # wait 54663 00:04:57.716 ************************************ 00:04:57.716 END TEST dpdk_mem_utility 00:04:57.716 ************************************ 00:04:57.716 00:04:57.716 real 0m1.641s 00:04:57.716 user 0m1.862s 00:04:57.716 sys 0m0.335s 00:04:57.716 08:55:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.716 08:55:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.716 08:55:34 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.716 08:55:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.716 08:55:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.716 08:55:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.716 ************************************ 00:04:57.716 START TEST event 00:04:57.716 
************************************ 00:04:57.716 08:55:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.975 * Looking for test storage... 00:04:57.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.975 08:55:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:57.975 08:55:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:57.975 08:55:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:57.975 08:55:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:57.975 08:55:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:57.975 08:55:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:57.975 08:55:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:57.975 08:55:34 -- scripts/common.sh@335 -- # IFS=.-: 00:04:57.975 08:55:34 -- scripts/common.sh@335 -- # read -ra ver1 00:04:57.975 08:55:34 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.975 08:55:34 -- scripts/common.sh@336 -- # read -ra ver2 00:04:57.975 08:55:34 -- scripts/common.sh@337 -- # local 'op=<' 00:04:57.975 08:55:34 -- scripts/common.sh@339 -- # ver1_l=2 00:04:57.975 08:55:34 -- scripts/common.sh@340 -- # ver2_l=1 00:04:57.975 08:55:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:57.975 08:55:34 -- scripts/common.sh@343 -- # case "$op" in 00:04:57.975 08:55:34 -- scripts/common.sh@344 -- # : 1 00:04:57.975 08:55:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:57.975 08:55:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.975 08:55:34 -- scripts/common.sh@364 -- # decimal 1 00:04:57.975 08:55:34 -- scripts/common.sh@352 -- # local d=1 00:04:57.975 08:55:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.975 08:55:34 -- scripts/common.sh@354 -- # echo 1 00:04:57.975 08:55:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:57.975 08:55:34 -- scripts/common.sh@365 -- # decimal 2 00:04:57.975 08:55:34 -- scripts/common.sh@352 -- # local d=2 00:04:57.975 08:55:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.975 08:55:34 -- scripts/common.sh@354 -- # echo 2 00:04:57.975 08:55:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:57.975 08:55:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:57.975 08:55:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:57.975 08:55:34 -- scripts/common.sh@367 -- # return 0 00:04:57.975 08:55:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.975 08:55:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:57.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.975 --rc genhtml_branch_coverage=1 00:04:57.975 --rc genhtml_function_coverage=1 00:04:57.975 --rc genhtml_legend=1 00:04:57.975 --rc geninfo_all_blocks=1 00:04:57.975 --rc geninfo_unexecuted_blocks=1 00:04:57.975 00:04:57.975 ' 00:04:57.975 08:55:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:57.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.975 --rc genhtml_branch_coverage=1 00:04:57.975 --rc genhtml_function_coverage=1 00:04:57.975 --rc genhtml_legend=1 00:04:57.975 --rc geninfo_all_blocks=1 00:04:57.975 --rc geninfo_unexecuted_blocks=1 00:04:57.975 00:04:57.975 ' 00:04:57.975 08:55:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:57.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.975 --rc genhtml_branch_coverage=1 00:04:57.975 --rc 
genhtml_function_coverage=1 00:04:57.975 --rc genhtml_legend=1 00:04:57.975 --rc geninfo_all_blocks=1 00:04:57.975 --rc geninfo_unexecuted_blocks=1 00:04:57.975 00:04:57.975 ' 00:04:57.975 08:55:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:57.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.975 --rc genhtml_branch_coverage=1 00:04:57.975 --rc genhtml_function_coverage=1 00:04:57.975 --rc genhtml_legend=1 00:04:57.975 --rc geninfo_all_blocks=1 00:04:57.975 --rc geninfo_unexecuted_blocks=1 00:04:57.975 00:04:57.975 ' 00:04:57.975 08:55:34 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:57.975 08:55:34 -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.975 08:55:34 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.975 08:55:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:57.975 08:55:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.975 08:55:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.975 ************************************ 00:04:57.975 START TEST event_perf 00:04:57.975 ************************************ 00:04:57.975 08:55:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.975 Running I/O for 1 seconds...[2024-11-17 08:55:34.851566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.975 [2024-11-17 08:55:34.851816] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54742 ] 00:04:58.234 [2024-11-17 08:55:34.993765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.234 [2024-11-17 08:55:35.067805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.234 [2024-11-17 08:55:35.067875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.234 [2024-11-17 08:55:35.068010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.234 Running I/O for 1 seconds...[2024-11-17 08:55:35.068018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.656 00:04:59.656 lcore 0: 188772 00:04:59.656 lcore 1: 188770 00:04:59.656 lcore 2: 188770 00:04:59.656 lcore 3: 188772 00:04:59.656 done. 
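The event_perf run above starts one SPDK reactor per core in the 0xF mask and, for the one-second window requested by -t 1, counts how many events each lcore processed; the near-identical per-lcore totals (188770-188772) show the work was spread evenly across the four reactors. Below is a minimal bash sketch of wrapping such a run and sanity-checking the balance; the binary path and flags are taken from the trace above, while the 10% threshold is an illustrative assumption, not part of the test.

# Sketch: run event_perf for 1 s on 4 cores and check the per-lcore counts are roughly even
# (binary path and flags from the trace above; the 10% threshold is an assumption).
perf_bin=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
out=$("$perf_bin" -m 0xF -t 1)
echo "$out"
# Collect the counts printed as "lcore N: <count>".
counts=($(echo "$out" | awk '/^lcore/ {print $3}'))
min=${counts[0]}; max=${counts[0]}
for c in "${counts[@]}"; do
  (( c < min )) && min=$c
  (( c > max )) && max=$c
done
(( (max - min) * 10 < max )) && echo "lcore balance OK" || echo "lcore imbalance: min=$min max=$max"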
00:04:59.656 00:04:59.656 real 0m1.340s 00:04:59.656 user 0m4.161s 00:04:59.656 sys 0m0.054s 00:04:59.656 08:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.656 08:55:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.656 ************************************ 00:04:59.656 END TEST event_perf 00:04:59.656 ************************************ 00:04:59.656 08:55:36 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.656 08:55:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:59.656 08:55:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.656 08:55:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.656 ************************************ 00:04:59.656 START TEST event_reactor 00:04:59.656 ************************************ 00:04:59.656 08:55:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.656 [2024-11-17 08:55:36.238449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:59.656 [2024-11-17 08:55:36.238711] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54780 ] 00:04:59.656 [2024-11-17 08:55:36.372811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.656 [2024-11-17 08:55:36.431507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.593 test_start 00:05:00.593 oneshot 00:05:00.593 tick 100 00:05:00.593 tick 100 00:05:00.593 tick 250 00:05:00.593 tick 100 00:05:00.593 tick 100 00:05:00.593 tick 100 00:05:00.593 tick 250 00:05:00.593 tick 500 00:05:00.593 tick 100 00:05:00.593 tick 100 00:05:00.593 tick 250 00:05:00.593 tick 100 00:05:00.593 tick 100 00:05:00.593 test_end 00:05:00.593 ************************************ 00:05:00.593 END TEST event_reactor 00:05:00.593 ************************************ 00:05:00.593 00:05:00.593 real 0m1.295s 00:05:00.593 user 0m1.150s 00:05:00.593 sys 0m0.039s 00:05:00.593 08:55:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.593 08:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:00.851 08:55:37 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.851 08:55:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:00.851 08:55:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.851 08:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:00.851 ************************************ 00:05:00.851 START TEST event_reactor_perf 00:05:00.851 ************************************ 00:05:00.851 08:55:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.851 [2024-11-17 08:55:37.589079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
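The event_reactor trace above exercises the reactor's poller scheduling: after a one-shot event, pollers registered with periods of 100, 250 and 500 time units each print a tick line when they fire during the one-second run, so the shortest period appears most often and the 500-unit poller only once. As a rough bash-level view of what that output encodes, the sketch below just tallies the tick lines from a captured trace; the file name is hypothetical, since in the run above the output went straight to the console.

# Sketch: tally how often each poller period fired in a captured event_reactor trace
# (reactor_trace.txt is a hypothetical capture of the output shown above).
grep -E '^tick [0-9]+' reactor_trace.txt | sort | uniq -c
# Expected shape: "tick 100" most frequent, "tick 500" least frequent, matching the spread above.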
00:05:00.851 [2024-11-17 08:55:37.589169] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54810 ] 00:05:00.851 [2024-11-17 08:55:37.725576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.111 [2024-11-17 08:55:37.782101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.049 test_start 00:05:02.049 test_end 00:05:02.049 Performance: 451526 events per second 00:05:02.049 ************************************ 00:05:02.049 END TEST event_reactor_perf 00:05:02.049 ************************************ 00:05:02.049 00:05:02.049 real 0m1.288s 00:05:02.049 user 0m1.142s 00:05:02.049 sys 0m0.042s 00:05:02.049 08:55:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.049 08:55:38 -- common/autotest_common.sh@10 -- # set +x 00:05:02.049 08:55:38 -- event/event.sh@49 -- # uname -s 00:05:02.049 08:55:38 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.049 08:55:38 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.049 08:55:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.049 08:55:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.049 08:55:38 -- common/autotest_common.sh@10 -- # set +x 00:05:02.049 ************************************ 00:05:02.049 START TEST event_scheduler 00:05:02.049 ************************************ 00:05:02.049 08:55:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.308 * Looking for test storage... 00:05:02.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:02.308 08:55:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.308 08:55:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.308 08:55:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.308 08:55:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.308 08:55:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.308 08:55:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.308 08:55:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.308 08:55:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.308 08:55:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.308 08:55:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.308 08:55:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.308 08:55:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.308 08:55:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.308 08:55:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.308 08:55:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.308 08:55:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.308 08:55:39 -- scripts/common.sh@344 -- # : 1 00:05:02.308 08:55:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.308 08:55:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.308 08:55:39 -- scripts/common.sh@364 -- # decimal 1 00:05:02.308 08:55:39 -- scripts/common.sh@352 -- # local d=1 00:05:02.308 08:55:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.308 08:55:39 -- scripts/common.sh@354 -- # echo 1 00:05:02.308 08:55:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.308 08:55:39 -- scripts/common.sh@365 -- # decimal 2 00:05:02.308 08:55:39 -- scripts/common.sh@352 -- # local d=2 00:05:02.308 08:55:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.308 08:55:39 -- scripts/common.sh@354 -- # echo 2 00:05:02.308 08:55:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.308 08:55:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.308 08:55:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.308 08:55:39 -- scripts/common.sh@367 -- # return 0 00:05:02.308 08:55:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.308 08:55:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.308 --rc genhtml_branch_coverage=1 00:05:02.308 --rc genhtml_function_coverage=1 00:05:02.308 --rc genhtml_legend=1 00:05:02.308 --rc geninfo_all_blocks=1 00:05:02.308 --rc geninfo_unexecuted_blocks=1 00:05:02.308 00:05:02.308 ' 00:05:02.308 08:55:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.308 --rc genhtml_branch_coverage=1 00:05:02.308 --rc genhtml_function_coverage=1 00:05:02.308 --rc genhtml_legend=1 00:05:02.308 --rc geninfo_all_blocks=1 00:05:02.308 --rc geninfo_unexecuted_blocks=1 00:05:02.308 00:05:02.308 ' 00:05:02.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.308 08:55:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.308 --rc genhtml_branch_coverage=1 00:05:02.308 --rc genhtml_function_coverage=1 00:05:02.308 --rc genhtml_legend=1 00:05:02.308 --rc geninfo_all_blocks=1 00:05:02.308 --rc geninfo_unexecuted_blocks=1 00:05:02.308 00:05:02.308 ' 00:05:02.308 08:55:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.308 --rc genhtml_branch_coverage=1 00:05:02.308 --rc genhtml_function_coverage=1 00:05:02.308 --rc genhtml_legend=1 00:05:02.308 --rc geninfo_all_blocks=1 00:05:02.308 --rc geninfo_unexecuted_blocks=1 00:05:02.308 00:05:02.308 ' 00:05:02.308 08:55:39 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.308 08:55:39 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54884 00:05:02.308 08:55:39 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.308 08:55:39 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.308 08:55:39 -- scheduler/scheduler.sh@37 -- # waitforlisten 54884 00:05:02.308 08:55:39 -- common/autotest_common.sh@829 -- # '[' -z 54884 ']' 00:05:02.308 08:55:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.308 08:55:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.308 08:55:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
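At this point scheduler.sh has launched the scheduler app with reactors on mask 0xF, the main lcore selected by -p 0x2 and --wait-for-rpc so initialization pauses until an RPC arrives, and waitforlisten (with its max_retries=100 seen in the trace) blocks until the app answers on /var/tmp/spdk.sock. The lines below are a minimal sketch of that wait pattern, not the actual waitforlisten implementation; the poll interval and the use of spdk_get_version as a liveness probe are illustrative, while the rpc.py path, socket and pid come from the trace.

# Sketch: wait until the SPDK app answers RPC on its UNIX socket (not the real waitforlisten).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
pid=54884
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo "scheduler app exited before listening"; break; }
  if "$rpc_py" -s "$sock" spdk_get_version >/dev/null 2>&1; then
    echo "scheduler app is listening on $sock"
    break
  fi
  sleep 0.1
done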
00:05:02.308 08:55:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.308 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.308 [2024-11-17 08:55:39.144304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:02.308 [2024-11-17 08:55:39.144614] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54884 ] 00:05:02.566 [2024-11-17 08:55:39.285757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.566 [2024-11-17 08:55:39.359107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.566 [2024-11-17 08:55:39.359234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.566 [2024-11-17 08:55:39.359353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.566 [2024-11-17 08:55:39.359356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.566 08:55:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.566 08:55:39 -- common/autotest_common.sh@862 -- # return 0 00:05:02.566 08:55:39 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:02.566 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.566 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.566 POWER: Env isn't set yet! 00:05:02.566 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:02.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.566 POWER: Attempting to initialise PSTAT power management... 00:05:02.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.566 POWER: Cannot set governor of lcore 0 to performance 00:05:02.566 POWER: Attempting to initialise AMD PSTATE power management... 00:05:02.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.566 POWER: Attempting to initialise CPPC power management... 00:05:02.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.566 POWER: Attempting to initialise VM power management... 
00:05:02.566 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:02.566 POWER: Unable to set Power Management Environment for lcore 0 00:05:02.566 [2024-11-17 08:55:39.427713] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:02.566 [2024-11-17 08:55:39.427800] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:02.566 [2024-11-17 08:55:39.427876] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:02.566 [2024-11-17 08:55:39.428086] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:02.567 [2024-11-17 08:55:39.428199] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:02.567 [2024-11-17 08:55:39.428279] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:02.567 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.567 08:55:39 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:02.567 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.567 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.567 [2024-11-17 08:55:39.487926] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:02.567 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.567 08:55:39 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:02.567 08:55:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.567 08:55:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.567 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 ************************************ 00:05:02.826 START TEST scheduler_create_thread 00:05:02.826 ************************************ 00:05:02.826 08:55:39 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 2 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 3 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 4 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 5 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 6 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 7 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 8 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 9 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 10 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 08:55:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.826 08:55:39 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:02.826 08:55:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.826 08:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:04.204 08:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.204 08:55:41 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:04.204 08:55:41 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:04.204 08:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.204 08:55:41 -- common/autotest_common.sh@10 -- # set +x 00:05:05.580 ************************************ 00:05:05.580 END TEST scheduler_create_thread 00:05:05.580 ************************************ 00:05:05.580 08:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.580 00:05:05.580 real 0m2.615s 00:05:05.580 user 0m0.018s 00:05:05.580 sys 0m0.006s 00:05:05.580 08:55:42 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.580 08:55:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.580 08:55:42 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:05.580 08:55:42 -- scheduler/scheduler.sh@46 -- # killprocess 54884 00:05:05.580 08:55:42 -- common/autotest_common.sh@936 -- # '[' -z 54884 ']' 00:05:05.580 08:55:42 -- common/autotest_common.sh@940 -- # kill -0 54884 00:05:05.580 08:55:42 -- common/autotest_common.sh@941 -- # uname 00:05:05.580 08:55:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.580 08:55:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54884 00:05:05.580 killing process with pid 54884 00:05:05.580 08:55:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:05.580 08:55:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:05.580 08:55:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54884' 00:05:05.580 08:55:42 -- common/autotest_common.sh@955 -- # kill 54884 00:05:05.580 08:55:42 -- common/autotest_common.sh@960 -- # wait 54884 00:05:05.839 [2024-11-17 08:55:42.595840] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:06.099 00:05:06.099 real 0m3.854s 00:05:06.099 user 0m5.726s 00:05:06.099 sys 0m0.291s 00:05:06.099 ************************************ 00:05:06.099 END TEST event_scheduler 00:05:06.099 ************************************ 00:05:06.099 08:55:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.099 08:55:42 -- common/autotest_common.sh@10 -- # set +x 00:05:06.099 08:55:42 -- event/event.sh@51 -- # modprobe -n nbd 00:05:06.099 08:55:42 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:06.099 08:55:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.099 08:55:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.099 08:55:42 -- common/autotest_common.sh@10 -- # set +x 00:05:06.099 ************************************ 00:05:06.099 START TEST app_repeat 00:05:06.099 ************************************ 00:05:06.099 08:55:42 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:06.099 08:55:42 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.099 08:55:42 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.099 08:55:42 -- event/event.sh@13 -- # local nbd_list 00:05:06.099 08:55:42 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.099 08:55:42 -- event/event.sh@14 -- # local bdev_list 00:05:06.099 08:55:42 -- event/event.sh@15 -- # local repeat_times=4 00:05:06.099 08:55:42 -- event/event.sh@17 -- # modprobe nbd 00:05:06.099 Process app_repeat pid: 54965 00:05:06.099 spdk_app_start Round 0 00:05:06.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
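The app_repeat test that starts here runs the app over several rounds (repeat_times=4, nbd_list=/dev/nbd0,/dev/nbd1, bdev_list=Malloc0,Malloc1) and, as the trace that follows shows, each round creates two 64 MiB malloc bdevs over the /var/tmp/spdk-nbd.sock RPC socket, exports them as kernel NBD devices and verifies a write/read-back through them. A compact sketch of that per-round pattern is below; the RPC names, socket path and dd/cmp parameters mirror the trace, while /tmp/randfile is an illustrative stand-in for the test's own nbdrandtest file.

# Sketch of one round's bdev + NBD verification (paths from the trace; /tmp/randfile is assumed).
rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc_py bdev_malloc_create 64 4096           # 64 MiB malloc bdev, 4096-byte blocks -> prints e.g. Malloc0
$rpc_py nbd_start_disk Malloc0 /dev/nbd0     # export the bdev as a kernel NBD device
dd if=/dev/urandom of=/tmp/randfile bs=4096 count=256
dd if=/tmp/randfile of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/randfile /dev/nbd0         # data written through NBD must read back identically
$rpc_py nbd_stop_disk /dev/nbd0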
00:05:06.099 08:55:42 -- event/event.sh@19 -- # repeat_pid=54965 00:05:06.099 08:55:42 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.099 08:55:42 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54965' 00:05:06.099 08:55:42 -- event/event.sh@23 -- # for i in {0..2} 00:05:06.099 08:55:42 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:06.099 08:55:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:06.099 08:55:42 -- event/event.sh@25 -- # waitforlisten 54965 /var/tmp/spdk-nbd.sock 00:05:06.099 08:55:42 -- common/autotest_common.sh@829 -- # '[' -z 54965 ']' 00:05:06.099 08:55:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.099 08:55:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.099 08:55:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.099 08:55:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.099 08:55:42 -- common/autotest_common.sh@10 -- # set +x 00:05:06.099 [2024-11-17 08:55:42.852501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:06.099 [2024-11-17 08:55:42.852610] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54965 ] 00:05:06.099 [2024-11-17 08:55:42.990787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.358 [2024-11-17 08:55:43.046685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.358 [2024-11-17 08:55:43.046705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.294 08:55:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.294 08:55:43 -- common/autotest_common.sh@862 -- # return 0 00:05:07.294 08:55:43 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.294 Malloc0 00:05:07.294 08:55:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.553 Malloc1 00:05:07.553 08:55:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@12 -- # local i 00:05:07.553 08:55:44 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.553 08:55:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.812 /dev/nbd0 00:05:07.812 08:55:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.812 08:55:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.812 08:55:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:07.812 08:55:44 -- common/autotest_common.sh@867 -- # local i 00:05:07.812 08:55:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:07.812 08:55:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:07.812 08:55:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:07.812 08:55:44 -- common/autotest_common.sh@871 -- # break 00:05:07.812 08:55:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:07.812 08:55:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:07.812 08:55:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.812 1+0 records in 00:05:07.812 1+0 records out 00:05:07.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282742 s, 14.5 MB/s 00:05:07.812 08:55:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.812 08:55:44 -- common/autotest_common.sh@884 -- # size=4096 00:05:07.812 08:55:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.812 08:55:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:07.812 08:55:44 -- common/autotest_common.sh@887 -- # return 0 00:05:07.812 08:55:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.812 08:55:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.812 08:55:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.071 /dev/nbd1 00:05:08.071 08:55:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.071 08:55:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.071 08:55:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:08.071 08:55:44 -- common/autotest_common.sh@867 -- # local i 00:05:08.071 08:55:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.071 08:55:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.071 08:55:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:08.071 08:55:44 -- common/autotest_common.sh@871 -- # break 00:05:08.071 08:55:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.071 08:55:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.071 08:55:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.071 1+0 records in 00:05:08.071 1+0 records out 00:05:08.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293156 s, 14.0 MB/s 00:05:08.071 08:55:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.071 08:55:44 -- common/autotest_common.sh@884 -- # size=4096 00:05:08.071 08:55:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.071 08:55:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.071 08:55:44 -- common/autotest_common.sh@887 -- # return 0 00:05:08.071 
08:55:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.071 08:55:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.071 08:55:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.071 08:55:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.071 08:55:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.330 08:55:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.330 { 00:05:08.330 "nbd_device": "/dev/nbd0", 00:05:08.330 "bdev_name": "Malloc0" 00:05:08.330 }, 00:05:08.330 { 00:05:08.330 "nbd_device": "/dev/nbd1", 00:05:08.330 "bdev_name": "Malloc1" 00:05:08.330 } 00:05:08.330 ]' 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.589 { 00:05:08.589 "nbd_device": "/dev/nbd0", 00:05:08.589 "bdev_name": "Malloc0" 00:05:08.589 }, 00:05:08.589 { 00:05:08.589 "nbd_device": "/dev/nbd1", 00:05:08.589 "bdev_name": "Malloc1" 00:05:08.589 } 00:05:08.589 ]' 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.589 /dev/nbd1' 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.589 /dev/nbd1' 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.589 256+0 records in 00:05:08.589 256+0 records out 00:05:08.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010513 s, 99.7 MB/s 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.589 256+0 records in 00:05:08.589 256+0 records out 00:05:08.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242332 s, 43.3 MB/s 00:05:08.589 08:55:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.590 256+0 records in 00:05:08.590 256+0 records out 00:05:08.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02625 s, 39.9 MB/s 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@51 -- # local i 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.590 08:55:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@41 -- # break 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.849 08:55:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@41 -- # break 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.108 08:55:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@65 -- # true 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.367 
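The pass just completed is the data-integrity half of the round: stage 1 MiB of random data in a temp file, copy it onto each exported /dev/nbdX with O_DIRECT, then compare every device back against the file byte-for-byte before deleting it. A minimal sketch of that round trip, with the temp path and device list as placeholders:

    verify_nbd_roundtrip() {
        local tmp_file=/tmp/nbdrandtest
        local nbd_list=("$@")                     # e.g. /dev/nbd0 /dev/nbd1
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done
        for dev in "${nbd_list[@]}"; do
            # cmp exits non-zero on the first differing byte, which fails the test under set -e.
            cmp -b -n 1M "$tmp_file" "$dev"
        done
        rm "$tmp_file"
    }

Called as verify_nbd_roundtrip /dev/nbd0 /dev/nbd1, this reproduces the 256-block writes and 1M comparisons shown in the log.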
08:55:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.367 08:55:46 -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.367 08:55:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.936 08:55:46 -- event/event.sh@35 -- # sleep 3 00:05:09.936 [2024-11-17 08:55:46.710370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.936 [2024-11-17 08:55:46.758211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.936 [2024-11-17 08:55:46.758222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.936 [2024-11-17 08:55:46.787196] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.936 [2024-11-17 08:55:46.787256] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.225 spdk_app_start Round 1 00:05:13.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.225 08:55:49 -- event/event.sh@23 -- # for i in {0..2} 00:05:13.225 08:55:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:13.225 08:55:49 -- event/event.sh@25 -- # waitforlisten 54965 /var/tmp/spdk-nbd.sock 00:05:13.225 08:55:49 -- common/autotest_common.sh@829 -- # '[' -z 54965 ']' 00:05:13.225 08:55:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.225 08:55:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.225 08:55:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
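The kill/sleep above ends one iteration; event.sh then relaunches the app for "Round 1" and replays the same RPC-driven scenario over /var/tmp/spdk-nbd.sock. Reduced to the calls visible in this trace (repository path shortened for readability), each round is essentially:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096              # creates Malloc0
    $rpc bdev_malloc_create 64 4096              # creates Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0        # export the bdevs as NBD block devices
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    $rpc nbd_get_disks                           # JSON list used for the count checks
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM              # end the round; the loop sleeps and restarts the app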
00:05:13.225 08:55:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.225 08:55:49 -- common/autotest_common.sh@10 -- # set +x 00:05:13.225 08:55:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.225 08:55:49 -- common/autotest_common.sh@862 -- # return 0 00:05:13.225 08:55:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.225 Malloc0 00:05:13.225 08:55:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.485 Malloc1 00:05:13.485 08:55:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@12 -- # local i 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.485 08:55:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.752 /dev/nbd0 00:05:13.752 08:55:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.752 08:55:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.752 08:55:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:13.752 08:55:50 -- common/autotest_common.sh@867 -- # local i 00:05:13.752 08:55:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.752 08:55:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.752 08:55:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:13.752 08:55:50 -- common/autotest_common.sh@871 -- # break 00:05:13.752 08:55:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.752 08:55:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.752 08:55:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.752 1+0 records in 00:05:13.752 1+0 records out 00:05:13.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410087 s, 10.0 MB/s 00:05:13.752 08:55:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.752 08:55:50 -- common/autotest_common.sh@884 -- # size=4096 00:05:13.752 08:55:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.752 08:55:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.752 08:55:50 -- common/autotest_common.sh@887 -- # return 0 00:05:13.752 08:55:50 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.752 08:55:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.752 08:55:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.042 /dev/nbd1 00:05:14.042 08:55:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.042 08:55:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.042 08:55:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:14.042 08:55:50 -- common/autotest_common.sh@867 -- # local i 00:05:14.042 08:55:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.042 08:55:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.042 08:55:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:14.042 08:55:50 -- common/autotest_common.sh@871 -- # break 00:05:14.042 08:55:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.042 08:55:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.042 08:55:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.042 1+0 records in 00:05:14.042 1+0 records out 00:05:14.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447387 s, 9.2 MB/s 00:05:14.042 08:55:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.042 08:55:50 -- common/autotest_common.sh@884 -- # size=4096 00:05:14.042 08:55:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.042 08:55:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.042 08:55:50 -- common/autotest_common.sh@887 -- # return 0 00:05:14.042 08:55:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.042 08:55:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.042 08:55:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.042 08:55:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.042 08:55:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.306 { 00:05:14.306 "nbd_device": "/dev/nbd0", 00:05:14.306 "bdev_name": "Malloc0" 00:05:14.306 }, 00:05:14.306 { 00:05:14.306 "nbd_device": "/dev/nbd1", 00:05:14.306 "bdev_name": "Malloc1" 00:05:14.306 } 00:05:14.306 ]' 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.306 { 00:05:14.306 "nbd_device": "/dev/nbd0", 00:05:14.306 "bdev_name": "Malloc0" 00:05:14.306 }, 00:05:14.306 { 00:05:14.306 "nbd_device": "/dev/nbd1", 00:05:14.306 "bdev_name": "Malloc1" 00:05:14.306 } 00:05:14.306 ]' 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.306 /dev/nbd1' 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.306 /dev/nbd1' 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.306 08:55:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.565 256+0 records in 00:05:14.565 256+0 records out 00:05:14.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00950835 s, 110 MB/s 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.565 256+0 records in 00:05:14.565 256+0 records out 00:05:14.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254329 s, 41.2 MB/s 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.565 256+0 records in 00:05:14.565 256+0 records out 00:05:14.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247868 s, 42.3 MB/s 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@51 -- # local i 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.565 08:55:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.824 08:55:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@41 -- # break 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.825 08:55:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@41 -- # break 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.084 08:55:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@65 -- # true 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.343 08:55:52 -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.343 08:55:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.602 08:55:52 -- event/event.sh@35 -- # sleep 3 00:05:15.860 [2024-11-17 08:55:52.618429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.861 [2024-11-17 08:55:52.666484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.861 [2024-11-17 08:55:52.666493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.861 [2024-11-17 08:55:52.694117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.861 [2024-11-17 08:55:52.694187] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.149 spdk_app_start Round 2 00:05:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
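The empty-list check above is the generic counting helper at work: nbd_get_disks returns JSON, jq extracts the nbd_device fields, and grep -c counts the /dev/nbd entries (2 while the malloc bdevs are exported, 0 once both are stopped). A sketch of that check, assuming the same rpc.py socket used throughout this test:

    nbd_count_is() {
        local rpc_server=$1 expected=$2
        local names count
        names=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits non-zero when nothing matches; '|| true' keeps set -e happy.
        count=$(echo "$names" | grep -c /dev/nbd || true)
        [ "$count" -eq "$expected" ]
    }

    # nbd_count_is /var/tmp/spdk-nbd.sock 0   → succeeds once both disks are stopped, as above.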
00:05:19.149 08:55:55 -- event/event.sh@23 -- # for i in {0..2} 00:05:19.149 08:55:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:19.149 08:55:55 -- event/event.sh@25 -- # waitforlisten 54965 /var/tmp/spdk-nbd.sock 00:05:19.149 08:55:55 -- common/autotest_common.sh@829 -- # '[' -z 54965 ']' 00:05:19.149 08:55:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.149 08:55:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.149 08:55:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.149 08:55:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.149 08:55:55 -- common/autotest_common.sh@10 -- # set +x 00:05:19.149 08:55:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.149 08:55:55 -- common/autotest_common.sh@862 -- # return 0 00:05:19.149 08:55:55 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.149 Malloc0 00:05:19.149 08:55:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.408 Malloc1 00:05:19.408 08:55:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@12 -- # local i 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.408 08:55:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.667 /dev/nbd0 00:05:19.667 08:55:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.667 08:55:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.667 08:55:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:19.667 08:55:56 -- common/autotest_common.sh@867 -- # local i 00:05:19.667 08:55:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.667 08:55:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.668 08:55:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:19.668 08:55:56 -- common/autotest_common.sh@871 -- # break 00:05:19.668 08:55:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.668 08:55:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.668 08:55:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:19.668 1+0 records in 00:05:19.668 1+0 records out 00:05:19.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000151868 s, 27.0 MB/s 00:05:19.668 08:55:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.668 08:55:56 -- common/autotest_common.sh@884 -- # size=4096 00:05:19.668 08:55:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.668 08:55:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.668 08:55:56 -- common/autotest_common.sh@887 -- # return 0 00:05:19.668 08:55:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.668 08:55:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.668 08:55:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.927 /dev/nbd1 00:05:19.927 08:55:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.927 08:55:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.927 08:55:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:19.927 08:55:56 -- common/autotest_common.sh@867 -- # local i 00:05:19.927 08:55:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.927 08:55:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.927 08:55:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:19.927 08:55:56 -- common/autotest_common.sh@871 -- # break 00:05:19.927 08:55:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.927 08:55:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.927 08:55:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.927 1+0 records in 00:05:19.927 1+0 records out 00:05:19.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178419 s, 23.0 MB/s 00:05:19.927 08:55:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.927 08:55:56 -- common/autotest_common.sh@884 -- # size=4096 00:05:19.927 08:55:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.927 08:55:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.927 08:55:56 -- common/autotest_common.sh@887 -- # return 0 00:05:19.927 08:55:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.927 08:55:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.927 08:55:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.927 08:55:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.927 08:55:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.186 08:55:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.186 { 00:05:20.186 "nbd_device": "/dev/nbd0", 00:05:20.186 "bdev_name": "Malloc0" 00:05:20.186 }, 00:05:20.186 { 00:05:20.186 "nbd_device": "/dev/nbd1", 00:05:20.186 "bdev_name": "Malloc1" 00:05:20.186 } 00:05:20.186 ]' 00:05:20.186 08:55:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.186 { 00:05:20.186 "nbd_device": "/dev/nbd0", 00:05:20.186 "bdev_name": "Malloc0" 00:05:20.186 }, 00:05:20.186 { 00:05:20.186 "nbd_device": "/dev/nbd1", 00:05:20.186 "bdev_name": "Malloc1" 00:05:20.186 } 00:05:20.186 ]' 00:05:20.186 08:55:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:05:20.445 /dev/nbd1' 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.445 /dev/nbd1' 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.445 256+0 records in 00:05:20.445 256+0 records out 00:05:20.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618399 s, 170 MB/s 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.445 256+0 records in 00:05:20.445 256+0 records out 00:05:20.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213726 s, 49.1 MB/s 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.445 256+0 records in 00:05:20.445 256+0 records out 00:05:20.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263568 s, 39.8 MB/s 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@51 -- # local i 00:05:20.445 
08:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.445 08:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@41 -- # break 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.704 08:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@41 -- # break 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.963 08:55:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.222 08:55:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.222 08:55:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.222 08:55:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.222 08:55:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.222 08:55:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.222 08:55:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.222 08:55:58 -- bdev/nbd_common.sh@65 -- # true 00:05:21.222 08:55:58 -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.222 08:55:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.222 08:55:58 -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.223 08:55:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.223 08:55:58 -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.223 08:55:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.481 08:55:58 -- event/event.sh@35 -- # sleep 3 00:05:21.741 [2024-11-17 08:55:58.426634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.741 [2024-11-17 08:55:58.476253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.741 [2024-11-17 08:55:58.476264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.741 [2024-11-17 08:55:58.504294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.741 [2024-11-17 08:55:58.504364] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
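nbd_stop_disk only asks the target to detach; the waitfornbd_exit calls above then poll /proc/partitions until the nbd0/nbd1 entries disappear, so the next round can reattach the same device nodes safely. A sketch of that wait, with the poll interval assumed rather than visible in the trace:

    waitfornbd_exit_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1        # still attached; give the kernel a moment
            else
                break            # device entry is gone, safe to continue
            fi
        done
        return 0
    }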
00:05:25.029 08:56:01 -- event/event.sh@38 -- # waitforlisten 54965 /var/tmp/spdk-nbd.sock 00:05:25.029 08:56:01 -- common/autotest_common.sh@829 -- # '[' -z 54965 ']' 00:05:25.029 08:56:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.029 08:56:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.029 08:56:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.029 08:56:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.030 08:56:01 -- common/autotest_common.sh@10 -- # set +x 00:05:25.030 08:56:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.030 08:56:01 -- common/autotest_common.sh@862 -- # return 0 00:05:25.030 08:56:01 -- event/event.sh@39 -- # killprocess 54965 00:05:25.030 08:56:01 -- common/autotest_common.sh@936 -- # '[' -z 54965 ']' 00:05:25.030 08:56:01 -- common/autotest_common.sh@940 -- # kill -0 54965 00:05:25.030 08:56:01 -- common/autotest_common.sh@941 -- # uname 00:05:25.030 08:56:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.030 08:56:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54965 00:05:25.030 killing process with pid 54965 00:05:25.030 08:56:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.030 08:56:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.030 08:56:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54965' 00:05:25.030 08:56:01 -- common/autotest_common.sh@955 -- # kill 54965 00:05:25.030 08:56:01 -- common/autotest_common.sh@960 -- # wait 54965 00:05:25.030 spdk_app_start is called in Round 0. 00:05:25.030 Shutdown signal received, stop current app iteration 00:05:25.030 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:25.030 spdk_app_start is called in Round 1. 00:05:25.030 Shutdown signal received, stop current app iteration 00:05:25.030 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:25.030 spdk_app_start is called in Round 2. 00:05:25.030 Shutdown signal received, stop current app iteration 00:05:25.030 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:25.030 spdk_app_start is called in Round 3. 
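killprocess, which just tore down pid 54965, is deliberately careful: it confirms the pid is still alive, uses ps to check the process is the SPDK reactor rather than something like a sudo wrapper, and only then signals it and waits so the next test starts clean. A condensed sketch of that sequence (the sudo branch is simplified here to a bail-out):

    killprocess_sketch() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2> /dev/null || return 1        # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1      # do not signal a privilege wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true
    }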
00:05:25.030 Shutdown signal received, stop current app iteration 00:05:25.030 08:56:01 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:25.030 08:56:01 -- event/event.sh@42 -- # return 0 00:05:25.030 00:05:25.030 real 0m18.927s 00:05:25.030 user 0m42.989s 00:05:25.030 sys 0m2.516s 00:05:25.030 08:56:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.030 08:56:01 -- common/autotest_common.sh@10 -- # set +x 00:05:25.030 ************************************ 00:05:25.030 END TEST app_repeat 00:05:25.030 ************************************ 00:05:25.030 08:56:01 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:25.030 08:56:01 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.030 08:56:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.030 08:56:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.030 08:56:01 -- common/autotest_common.sh@10 -- # set +x 00:05:25.030 ************************************ 00:05:25.030 START TEST cpu_locks 00:05:25.030 ************************************ 00:05:25.030 08:56:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.030 * Looking for test storage... 00:05:25.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.030 08:56:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.030 08:56:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.030 08:56:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.030 08:56:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.030 08:56:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.030 08:56:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.030 08:56:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.030 08:56:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.030 08:56:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.030 08:56:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.030 08:56:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.289 08:56:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.289 08:56:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.289 08:56:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.289 08:56:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.289 08:56:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.289 08:56:01 -- scripts/common.sh@344 -- # : 1 00:05:25.289 08:56:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.289 08:56:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.289 08:56:01 -- scripts/common.sh@364 -- # decimal 1 00:05:25.289 08:56:01 -- scripts/common.sh@352 -- # local d=1 00:05:25.289 08:56:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.289 08:56:01 -- scripts/common.sh@354 -- # echo 1 00:05:25.289 08:56:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.289 08:56:01 -- scripts/common.sh@365 -- # decimal 2 00:05:25.289 08:56:01 -- scripts/common.sh@352 -- # local d=2 00:05:25.289 08:56:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.289 08:56:01 -- scripts/common.sh@354 -- # echo 2 00:05:25.289 08:56:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.289 08:56:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.289 08:56:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.289 08:56:01 -- scripts/common.sh@367 -- # return 0 00:05:25.289 08:56:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.289 08:56:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.289 --rc genhtml_branch_coverage=1 00:05:25.289 --rc genhtml_function_coverage=1 00:05:25.289 --rc genhtml_legend=1 00:05:25.289 --rc geninfo_all_blocks=1 00:05:25.289 --rc geninfo_unexecuted_blocks=1 00:05:25.289 00:05:25.289 ' 00:05:25.289 08:56:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.289 --rc genhtml_branch_coverage=1 00:05:25.289 --rc genhtml_function_coverage=1 00:05:25.289 --rc genhtml_legend=1 00:05:25.289 --rc geninfo_all_blocks=1 00:05:25.289 --rc geninfo_unexecuted_blocks=1 00:05:25.289 00:05:25.289 ' 00:05:25.289 08:56:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.289 --rc genhtml_branch_coverage=1 00:05:25.289 --rc genhtml_function_coverage=1 00:05:25.289 --rc genhtml_legend=1 00:05:25.289 --rc geninfo_all_blocks=1 00:05:25.289 --rc geninfo_unexecuted_blocks=1 00:05:25.289 00:05:25.289 ' 00:05:25.289 08:56:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.289 --rc genhtml_branch_coverage=1 00:05:25.289 --rc genhtml_function_coverage=1 00:05:25.289 --rc genhtml_legend=1 00:05:25.289 --rc geninfo_all_blocks=1 00:05:25.289 --rc geninfo_unexecuted_blocks=1 00:05:25.289 00:05:25.289 ' 00:05:25.289 08:56:01 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:25.289 08:56:01 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:25.289 08:56:01 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:25.289 08:56:01 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:25.289 08:56:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.289 08:56:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.289 08:56:01 -- common/autotest_common.sh@10 -- # set +x 00:05:25.289 ************************************ 00:05:25.289 START TEST default_locks 00:05:25.289 ************************************ 00:05:25.289 08:56:01 -- common/autotest_common.sh@1114 -- # default_locks 00:05:25.289 08:56:01 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55410 00:05:25.289 08:56:01 -- event/cpu_locks.sh@47 -- # waitforlisten 55410 00:05:25.289 08:56:01 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:05:25.289 08:56:01 -- common/autotest_common.sh@829 -- # '[' -z 55410 ']' 00:05:25.289 08:56:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.289 08:56:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.289 08:56:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.289 08:56:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.289 08:56:01 -- common/autotest_common.sh@10 -- # set +x 00:05:25.289 [2024-11-17 08:56:02.041926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.289 [2024-11-17 08:56:02.042041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55410 ] 00:05:25.289 [2024-11-17 08:56:02.176729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.548 [2024-11-17 08:56:02.233381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.548 [2024-11-17 08:56:02.233590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.115 08:56:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.115 08:56:03 -- common/autotest_common.sh@862 -- # return 0 00:05:26.115 08:56:03 -- event/cpu_locks.sh@49 -- # locks_exist 55410 00:05:26.115 08:56:03 -- event/cpu_locks.sh@22 -- # lslocks -p 55410 00:05:26.115 08:56:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.684 08:56:03 -- event/cpu_locks.sh@50 -- # killprocess 55410 00:05:26.684 08:56:03 -- common/autotest_common.sh@936 -- # '[' -z 55410 ']' 00:05:26.684 08:56:03 -- common/autotest_common.sh@940 -- # kill -0 55410 00:05:26.684 08:56:03 -- common/autotest_common.sh@941 -- # uname 00:05:26.684 08:56:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.684 08:56:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55410 00:05:26.684 08:56:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.684 08:56:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.684 killing process with pid 55410 00:05:26.684 08:56:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55410' 00:05:26.684 08:56:03 -- common/autotest_common.sh@955 -- # kill 55410 00:05:26.684 08:56:03 -- common/autotest_common.sh@960 -- # wait 55410 00:05:26.943 08:56:03 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55410 00:05:26.943 08:56:03 -- common/autotest_common.sh@650 -- # local es=0 00:05:26.943 08:56:03 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55410 00:05:26.943 08:56:03 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:26.943 08:56:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.943 08:56:03 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:26.943 08:56:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.943 08:56:03 -- common/autotest_common.sh@653 -- # waitforlisten 55410 00:05:26.943 08:56:03 -- common/autotest_common.sh@829 -- # '[' -z 55410 ']' 00:05:26.943 08:56:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.943 08:56:03 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.943 08:56:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.943 08:56:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.943 08:56:03 -- common/autotest_common.sh@10 -- # set +x 00:05:26.943 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55410) - No such process 00:05:26.943 ERROR: process (pid: 55410) is no longer running 00:05:26.943 08:56:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.943 08:56:03 -- common/autotest_common.sh@862 -- # return 1 00:05:26.943 08:56:03 -- common/autotest_common.sh@653 -- # es=1 00:05:26.943 08:56:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.943 08:56:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.943 08:56:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.943 08:56:03 -- event/cpu_locks.sh@54 -- # no_locks 00:05:26.943 08:56:03 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.943 08:56:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.943 08:56:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.943 00:05:26.943 real 0m1.747s 00:05:26.943 user 0m2.041s 00:05:26.943 sys 0m0.442s 00:05:26.943 08:56:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.943 ************************************ 00:05:26.943 END TEST default_locks 00:05:26.943 08:56:03 -- common/autotest_common.sh@10 -- # set +x 00:05:26.943 ************************************ 00:05:26.943 08:56:03 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:26.943 08:56:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.943 08:56:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.943 08:56:03 -- common/autotest_common.sh@10 -- # set +x 00:05:26.943 ************************************ 00:05:26.943 START TEST default_locks_via_rpc 00:05:26.943 ************************************ 00:05:26.943 08:56:03 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:26.943 08:56:03 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55457 00:05:26.943 08:56:03 -- event/cpu_locks.sh@63 -- # waitforlisten 55457 00:05:26.943 08:56:03 -- common/autotest_common.sh@829 -- # '[' -z 55457 ']' 00:05:26.943 08:56:03 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.943 08:56:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.943 08:56:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.943 08:56:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.943 08:56:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.943 08:56:03 -- common/autotest_common.sh@10 -- # set +x 00:05:26.943 [2024-11-17 08:56:03.833501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
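The sequence above is the negative half of default_locks: with pid 55410 already gone, waitforlisten against it has to fail, and the NOT wrapper turns that expected failure into a passing assertion (es=1 in the trace). The wrapper's core idea, simplified; the real helper also validates that the wrapped command exists and treats signal exit codes specially:

    NOT() {
        local es=0
        "$@" || es=$?
        # NOT succeeds only when the wrapped command did not.
        (( es != 0 ))
    }

    # From this test: NOT waitforlisten 55410  → passes, because the pid no longer exists.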
00:05:26.943 [2024-11-17 08:56:03.833608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55457 ] 00:05:27.203 [2024-11-17 08:56:03.971877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.203 [2024-11-17 08:56:04.028388] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.203 [2024-11-17 08:56:04.028540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.141 08:56:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.141 08:56:04 -- common/autotest_common.sh@862 -- # return 0 00:05:28.141 08:56:04 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:28.141 08:56:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.141 08:56:04 -- common/autotest_common.sh@10 -- # set +x 00:05:28.141 08:56:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.141 08:56:04 -- event/cpu_locks.sh@67 -- # no_locks 00:05:28.141 08:56:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.141 08:56:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.141 08:56:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.141 08:56:04 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.141 08:56:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.141 08:56:04 -- common/autotest_common.sh@10 -- # set +x 00:05:28.141 08:56:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.141 08:56:04 -- event/cpu_locks.sh@71 -- # locks_exist 55457 00:05:28.141 08:56:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.141 08:56:04 -- event/cpu_locks.sh@22 -- # lslocks -p 55457 00:05:28.400 08:56:05 -- event/cpu_locks.sh@73 -- # killprocess 55457 00:05:28.400 08:56:05 -- common/autotest_common.sh@936 -- # '[' -z 55457 ']' 00:05:28.400 08:56:05 -- common/autotest_common.sh@940 -- # kill -0 55457 00:05:28.400 08:56:05 -- common/autotest_common.sh@941 -- # uname 00:05:28.400 08:56:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.400 08:56:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55457 00:05:28.400 08:56:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.400 killing process with pid 55457 00:05:28.400 08:56:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.400 08:56:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55457' 00:05:28.400 08:56:05 -- common/autotest_common.sh@955 -- # kill 55457 00:05:28.400 08:56:05 -- common/autotest_common.sh@960 -- # wait 55457 00:05:28.659 00:05:28.659 real 0m1.775s 00:05:28.659 user 0m2.050s 00:05:28.659 sys 0m0.459s 00:05:28.659 08:56:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.659 08:56:05 -- common/autotest_common.sh@10 -- # set +x 00:05:28.659 ************************************ 00:05:28.659 END TEST default_locks_via_rpc 00:05:28.659 ************************************ 00:05:28.918 08:56:05 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:28.918 08:56:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.918 08:56:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.918 08:56:05 -- common/autotest_common.sh@10 -- # set +x 00:05:28.919 
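locks_exist, run here for pid 55457 just as it was for 55410, is how these tests observe the CPU core locks: a target started with -m 0x1 is expected to hold a lock file whose name contains spdk_cpu_lock, and lslocks lists that lock against the pid. A sketch of the check:

    locks_exist_sketch() {
        local pid=$1
        # lslocks prints the locks held by the pid; the grep confirms the core lock is among them.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }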
************************************ 00:05:28.919 START TEST non_locking_app_on_locked_coremask 00:05:28.919 ************************************ 00:05:28.919 08:56:05 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:28.919 08:56:05 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55508 00:05:28.919 08:56:05 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.919 08:56:05 -- event/cpu_locks.sh@81 -- # waitforlisten 55508 /var/tmp/spdk.sock 00:05:28.919 08:56:05 -- common/autotest_common.sh@829 -- # '[' -z 55508 ']' 00:05:28.919 08:56:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.919 08:56:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.919 08:56:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.919 08:56:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.919 08:56:05 -- common/autotest_common.sh@10 -- # set +x 00:05:28.919 [2024-11-17 08:56:05.664343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:28.919 [2024-11-17 08:56:05.664434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55508 ] 00:05:28.919 [2024-11-17 08:56:05.802057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.178 [2024-11-17 08:56:05.853917] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.178 [2024-11-17 08:56:05.854098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.745 08:56:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.745 08:56:06 -- common/autotest_common.sh@862 -- # return 0 00:05:29.745 08:56:06 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:29.745 08:56:06 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55524 00:05:29.745 08:56:06 -- event/cpu_locks.sh@85 -- # waitforlisten 55524 /var/tmp/spdk2.sock 00:05:29.745 08:56:06 -- common/autotest_common.sh@829 -- # '[' -z 55524 ']' 00:05:29.745 08:56:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.745 08:56:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.745 08:56:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.745 08:56:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.745 08:56:06 -- common/autotest_common.sh@10 -- # set +x 00:05:30.004 [2024-11-17 08:56:06.701089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:30.004 [2024-11-17 08:56:06.701159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55524 ] 00:05:30.004 [2024-11-17 08:56:06.833647] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
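non_locking_app_on_locked_coremask needs two targets on the same core mask: the first (pid 55508) claims the core 0 lock as usual, and the second (pid 55524) is started with --disable-cpumask-locks and its own RPC socket so it can run anyway, which is what the "CPU core locks deactivated" notice above reports. The launch pattern, with the full /home/vagrant/spdk_repo/spdk prefix dropped for brevity:

    # First instance: default behaviour, takes the core 0 lock.
    build/bin/spdk_tgt -m 0x1 &
    pid1=$!

    # Second instance: same mask, lock checking disabled, separate RPC socket.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!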
00:05:30.004 [2024-11-17 08:56:06.833684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.263 [2024-11-17 08:56:06.939586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.263 [2024-11-17 08:56:06.939776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.831 08:56:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.831 08:56:07 -- common/autotest_common.sh@862 -- # return 0 00:05:30.831 08:56:07 -- event/cpu_locks.sh@87 -- # locks_exist 55508 00:05:30.831 08:56:07 -- event/cpu_locks.sh@22 -- # lslocks -p 55508 00:05:30.831 08:56:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.767 08:56:08 -- event/cpu_locks.sh@89 -- # killprocess 55508 00:05:31.767 08:56:08 -- common/autotest_common.sh@936 -- # '[' -z 55508 ']' 00:05:31.767 08:56:08 -- common/autotest_common.sh@940 -- # kill -0 55508 00:05:31.767 08:56:08 -- common/autotest_common.sh@941 -- # uname 00:05:31.767 08:56:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.767 08:56:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55508 00:05:31.767 08:56:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.767 08:56:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.767 killing process with pid 55508 00:05:31.767 08:56:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55508' 00:05:31.767 08:56:08 -- common/autotest_common.sh@955 -- # kill 55508 00:05:31.767 08:56:08 -- common/autotest_common.sh@960 -- # wait 55508 00:05:32.335 08:56:09 -- event/cpu_locks.sh@90 -- # killprocess 55524 00:05:32.335 08:56:09 -- common/autotest_common.sh@936 -- # '[' -z 55524 ']' 00:05:32.335 08:56:09 -- common/autotest_common.sh@940 -- # kill -0 55524 00:05:32.335 08:56:09 -- common/autotest_common.sh@941 -- # uname 00:05:32.335 08:56:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.335 08:56:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55524 00:05:32.335 08:56:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.335 08:56:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.335 killing process with pid 55524 00:05:32.335 08:56:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55524' 00:05:32.335 08:56:09 -- common/autotest_common.sh@955 -- # kill 55524 00:05:32.335 08:56:09 -- common/autotest_common.sh@960 -- # wait 55524 00:05:32.594 00:05:32.594 real 0m3.714s 00:05:32.594 user 0m4.443s 00:05:32.594 sys 0m0.861s 00:05:32.594 08:56:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.594 08:56:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.594 ************************************ 00:05:32.594 END TEST non_locking_app_on_locked_coremask 00:05:32.594 ************************************ 00:05:32.594 08:56:09 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:32.594 08:56:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.594 08:56:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.594 08:56:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.594 ************************************ 00:05:32.594 START TEST locking_app_on_unlocked_coremask 00:05:32.594 ************************************ 00:05:32.594 08:56:09 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:32.594 08:56:09 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55585 00:05:32.594 08:56:09 -- event/cpu_locks.sh@99 -- # waitforlisten 55585 /var/tmp/spdk.sock 00:05:32.594 08:56:09 -- common/autotest_common.sh@829 -- # '[' -z 55585 ']' 00:05:32.594 08:56:09 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:32.594 08:56:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.594 08:56:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.594 08:56:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.594 08:56:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.594 08:56:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.594 [2024-11-17 08:56:09.460720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.594 [2024-11-17 08:56:09.460861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55585 ] 00:05:32.853 [2024-11-17 08:56:09.609302] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:32.853 [2024-11-17 08:56:09.609355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.853 [2024-11-17 08:56:09.659081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.853 [2024-11-17 08:56:09.659237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.789 08:56:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.789 08:56:10 -- common/autotest_common.sh@862 -- # return 0 00:05:33.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.789 08:56:10 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55601 00:05:33.789 08:56:10 -- event/cpu_locks.sh@103 -- # waitforlisten 55601 /var/tmp/spdk2.sock 00:05:33.789 08:56:10 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.789 08:56:10 -- common/autotest_common.sh@829 -- # '[' -z 55601 ']' 00:05:33.789 08:56:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.789 08:56:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.789 08:56:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.789 08:56:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.789 08:56:10 -- common/autotest_common.sh@10 -- # set +x 00:05:33.789 [2024-11-17 08:56:10.514381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:33.789 [2024-11-17 08:56:10.514707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55601 ] 00:05:33.789 [2024-11-17 08:56:10.655567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.048 [2024-11-17 08:56:10.752714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.048 [2024-11-17 08:56:10.752868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.615 08:56:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.615 08:56:11 -- common/autotest_common.sh@862 -- # return 0 00:05:34.615 08:56:11 -- event/cpu_locks.sh@105 -- # locks_exist 55601 00:05:34.615 08:56:11 -- event/cpu_locks.sh@22 -- # lslocks -p 55601 00:05:34.615 08:56:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.574 08:56:12 -- event/cpu_locks.sh@107 -- # killprocess 55585 00:05:35.574 08:56:12 -- common/autotest_common.sh@936 -- # '[' -z 55585 ']' 00:05:35.574 08:56:12 -- common/autotest_common.sh@940 -- # kill -0 55585 00:05:35.574 08:56:12 -- common/autotest_common.sh@941 -- # uname 00:05:35.574 08:56:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.574 08:56:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55585 00:05:35.574 killing process with pid 55585 00:05:35.574 08:56:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.574 08:56:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.574 08:56:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55585' 00:05:35.574 08:56:12 -- common/autotest_common.sh@955 -- # kill 55585 00:05:35.574 08:56:12 -- common/autotest_common.sh@960 -- # wait 55585 00:05:35.860 08:56:12 -- event/cpu_locks.sh@108 -- # killprocess 55601 00:05:35.860 08:56:12 -- common/autotest_common.sh@936 -- # '[' -z 55601 ']' 00:05:35.860 08:56:12 -- common/autotest_common.sh@940 -- # kill -0 55601 00:05:35.860 08:56:12 -- common/autotest_common.sh@941 -- # uname 00:05:35.860 08:56:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.860 08:56:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55601 00:05:36.131 08:56:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.131 killing process with pid 55601 00:05:36.131 08:56:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.131 08:56:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55601' 00:05:36.131 08:56:12 -- common/autotest_common.sh@955 -- # kill 55601 00:05:36.131 08:56:12 -- common/autotest_common.sh@960 -- # wait 55601 00:05:36.402 ************************************ 00:05:36.402 END TEST locking_app_on_unlocked_coremask 00:05:36.402 ************************************ 00:05:36.402 00:05:36.402 real 0m3.683s 00:05:36.402 user 0m4.390s 00:05:36.402 sys 0m0.877s 00:05:36.402 08:56:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.402 08:56:13 -- common/autotest_common.sh@10 -- # set +x 00:05:36.402 08:56:13 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:36.402 08:56:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.402 08:56:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.402 08:56:13 -- common/autotest_common.sh@10 -- # set +x 
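The two tests above exercise the same two-target pattern: one spdk_tgt is started on core mask 0x1, a second one is started on the same mask with its own RPC socket (-r /var/tmp/spdk2.sock), and CPU-mask locking is disabled on one side (--disable-cpumask-locks) so both can share core 0. A rough sketch of that setup, with the flags and socket path taken from the log and the rest assumed:

  ./build/bin/spdk_tgt -m 0x1 &                                                  # first target claims core 0
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target skips the lock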
00:05:36.402 ************************************ 00:05:36.402 START TEST locking_app_on_locked_coremask 00:05:36.402 ************************************ 00:05:36.402 08:56:13 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:36.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.403 08:56:13 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55663 00:05:36.403 08:56:13 -- event/cpu_locks.sh@116 -- # waitforlisten 55663 /var/tmp/spdk.sock 00:05:36.403 08:56:13 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.403 08:56:13 -- common/autotest_common.sh@829 -- # '[' -z 55663 ']' 00:05:36.403 08:56:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.403 08:56:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.403 08:56:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.403 08:56:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.403 08:56:13 -- common/autotest_common.sh@10 -- # set +x 00:05:36.403 [2024-11-17 08:56:13.170588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.403 [2024-11-17 08:56:13.170711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55663 ] 00:05:36.403 [2024-11-17 08:56:13.307109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.662 [2024-11-17 08:56:13.357400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.662 [2024-11-17 08:56:13.357587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.599 08:56:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.599 08:56:14 -- common/autotest_common.sh@862 -- # return 0 00:05:37.599 08:56:14 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55679 00:05:37.599 08:56:14 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.599 08:56:14 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55679 /var/tmp/spdk2.sock 00:05:37.599 08:56:14 -- common/autotest_common.sh@650 -- # local es=0 00:05:37.599 08:56:14 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55679 /var/tmp/spdk2.sock 00:05:37.599 08:56:14 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:37.599 08:56:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.599 08:56:14 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:37.599 08:56:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.599 08:56:14 -- common/autotest_common.sh@653 -- # waitforlisten 55679 /var/tmp/spdk2.sock 00:05:37.599 08:56:14 -- common/autotest_common.sh@829 -- # '[' -z 55679 ']' 00:05:37.599 08:56:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.599 08:56:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.599 08:56:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:37.599 08:56:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.599 08:56:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.599 [2024-11-17 08:56:14.255511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.599 [2024-11-17 08:56:14.255851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55679 ] 00:05:37.599 [2024-11-17 08:56:14.392666] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55663 has claimed it. 00:05:37.599 [2024-11-17 08:56:14.392738] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.167 ERROR: process (pid: 55679) is no longer running 00:05:38.167 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55679) - No such process 00:05:38.167 08:56:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.167 08:56:14 -- common/autotest_common.sh@862 -- # return 1 00:05:38.167 08:56:14 -- common/autotest_common.sh@653 -- # es=1 00:05:38.167 08:56:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.167 08:56:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.167 08:56:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.167 08:56:14 -- event/cpu_locks.sh@122 -- # locks_exist 55663 00:05:38.167 08:56:14 -- event/cpu_locks.sh@22 -- # lslocks -p 55663 00:05:38.167 08:56:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.735 08:56:15 -- event/cpu_locks.sh@124 -- # killprocess 55663 00:05:38.735 08:56:15 -- common/autotest_common.sh@936 -- # '[' -z 55663 ']' 00:05:38.735 08:56:15 -- common/autotest_common.sh@940 -- # kill -0 55663 00:05:38.735 08:56:15 -- common/autotest_common.sh@941 -- # uname 00:05:38.735 08:56:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.735 08:56:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55663 00:05:38.735 killing process with pid 55663 00:05:38.735 08:56:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.735 08:56:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.735 08:56:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55663' 00:05:38.735 08:56:15 -- common/autotest_common.sh@955 -- # kill 55663 00:05:38.735 08:56:15 -- common/autotest_common.sh@960 -- # wait 55663 00:05:38.995 00:05:38.995 real 0m2.566s 00:05:38.995 user 0m3.155s 00:05:38.995 sys 0m0.516s 00:05:38.995 08:56:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.995 08:56:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.995 ************************************ 00:05:38.995 END TEST locking_app_on_locked_coremask 00:05:38.995 ************************************ 00:05:38.995 08:56:15 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:38.995 08:56:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.995 08:56:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.995 08:56:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.995 ************************************ 00:05:38.995 START TEST locking_overlapped_coremask 00:05:38.995 ************************************ 00:05:38.995 08:56:15 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:38.995 08:56:15 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55730 00:05:38.995 08:56:15 -- event/cpu_locks.sh@133 -- # waitforlisten 55730 /var/tmp/spdk.sock 00:05:38.995 08:56:15 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:38.995 08:56:15 -- common/autotest_common.sh@829 -- # '[' -z 55730 ']' 00:05:38.995 08:56:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.995 08:56:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.995 08:56:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.995 08:56:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.995 08:56:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.995 [2024-11-17 08:56:15.785271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.995 [2024-11-17 08:56:15.785372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55730 ] 00:05:39.254 [2024-11-17 08:56:15.925560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.254 [2024-11-17 08:56:15.995135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.254 [2024-11-17 08:56:15.995454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.254 [2024-11-17 08:56:15.995613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.254 [2024-11-17 08:56:15.995623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.190 08:56:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.190 08:56:16 -- common/autotest_common.sh@862 -- # return 0 00:05:40.190 08:56:16 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55748 00:05:40.190 08:56:16 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55748 /var/tmp/spdk2.sock 00:05:40.190 08:56:16 -- common/autotest_common.sh@650 -- # local es=0 00:05:40.191 08:56:16 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55748 /var/tmp/spdk2.sock 00:05:40.191 08:56:16 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:40.191 08:56:16 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:40.191 08:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.191 08:56:16 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:40.191 08:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.191 08:56:16 -- common/autotest_common.sh@653 -- # waitforlisten 55748 /var/tmp/spdk2.sock 00:05:40.191 08:56:16 -- common/autotest_common.sh@829 -- # '[' -z 55748 ']' 00:05:40.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.191 08:56:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.191 08:56:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.191 08:56:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
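For the overlapped-coremask case the masks matter: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target is expected to fail when it tries to claim core 2, which is exactly what the error further below reports. A one-liner to see the overlap (cores are numbered from bit 0):

  printf 'overlap: 0x%x (core 2)\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2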
00:05:40.191 08:56:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.191 08:56:16 -- common/autotest_common.sh@10 -- # set +x 00:05:40.191 [2024-11-17 08:56:16.851861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.191 [2024-11-17 08:56:16.851962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55748 ] 00:05:40.191 [2024-11-17 08:56:16.992472] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55730 has claimed it. 00:05:40.191 [2024-11-17 08:56:16.992537] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.758 ERROR: process (pid: 55748) is no longer running 00:05:40.758 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55748) - No such process 00:05:40.758 08:56:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.758 08:56:17 -- common/autotest_common.sh@862 -- # return 1 00:05:40.758 08:56:17 -- common/autotest_common.sh@653 -- # es=1 00:05:40.758 08:56:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.758 08:56:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.758 08:56:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.758 08:56:17 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:40.758 08:56:17 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.758 08:56:17 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.758 08:56:17 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.759 08:56:17 -- event/cpu_locks.sh@141 -- # killprocess 55730 00:05:40.759 08:56:17 -- common/autotest_common.sh@936 -- # '[' -z 55730 ']' 00:05:40.759 08:56:17 -- common/autotest_common.sh@940 -- # kill -0 55730 00:05:40.759 08:56:17 -- common/autotest_common.sh@941 -- # uname 00:05:40.759 08:56:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.759 08:56:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55730 00:05:40.759 killing process with pid 55730 00:05:40.759 08:56:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.759 08:56:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.759 08:56:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55730' 00:05:40.759 08:56:17 -- common/autotest_common.sh@955 -- # kill 55730 00:05:40.759 08:56:17 -- common/autotest_common.sh@960 -- # wait 55730 00:05:41.018 00:05:41.018 real 0m2.083s 00:05:41.018 user 0m5.957s 00:05:41.018 sys 0m0.310s 00:05:41.018 ************************************ 00:05:41.018 END TEST locking_overlapped_coremask 00:05:41.018 ************************************ 00:05:41.018 08:56:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.018 08:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:41.018 08:56:17 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:41.018 08:56:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.018 08:56:17 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.018 08:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:41.018 ************************************ 00:05:41.018 START TEST locking_overlapped_coremask_via_rpc 00:05:41.018 ************************************ 00:05:41.018 08:56:17 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:41.018 08:56:17 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55788 00:05:41.018 08:56:17 -- event/cpu_locks.sh@149 -- # waitforlisten 55788 /var/tmp/spdk.sock 00:05:41.018 08:56:17 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:41.018 08:56:17 -- common/autotest_common.sh@829 -- # '[' -z 55788 ']' 00:05:41.018 08:56:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.018 08:56:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.018 08:56:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.018 08:56:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.018 08:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:41.018 [2024-11-17 08:56:17.909540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.018 [2024-11-17 08:56:17.909649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55788 ] 00:05:41.277 [2024-11-17 08:56:18.043562] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:41.277 [2024-11-17 08:56:18.043616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.277 [2024-11-17 08:56:18.096817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.277 [2024-11-17 08:56:18.097085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.277 [2024-11-17 08:56:18.097196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.277 [2024-11-17 08:56:18.097218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.215 08:56:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.215 08:56:18 -- common/autotest_common.sh@862 -- # return 0 00:05:42.215 08:56:18 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:42.215 08:56:18 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55806 00:05:42.215 08:56:18 -- event/cpu_locks.sh@153 -- # waitforlisten 55806 /var/tmp/spdk2.sock 00:05:42.215 08:56:18 -- common/autotest_common.sh@829 -- # '[' -z 55806 ']' 00:05:42.215 08:56:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.215 08:56:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.215 08:56:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:42.215 08:56:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.215 08:56:18 -- common/autotest_common.sh@10 -- # set +x 00:05:42.215 [2024-11-17 08:56:18.871149] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.215 [2024-11-17 08:56:18.871260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55806 ] 00:05:42.215 [2024-11-17 08:56:19.013338] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:42.215 [2024-11-17 08:56:19.013394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.215 [2024-11-17 08:56:19.123491] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.215 [2024-11-17 08:56:19.124380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.215 [2024-11-17 08:56:19.124513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.215 [2024-11-17 08:56:19.124510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:43.154 08:56:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.154 08:56:19 -- common/autotest_common.sh@862 -- # return 0 00:05:43.154 08:56:19 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:43.154 08:56:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.154 08:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:43.154 08:56:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.154 08:56:19 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.154 08:56:19 -- common/autotest_common.sh@650 -- # local es=0 00:05:43.155 08:56:19 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.155 08:56:19 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:43.155 08:56:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.155 08:56:19 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:43.155 08:56:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.155 08:56:19 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.155 08:56:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.155 08:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:43.155 [2024-11-17 08:56:19.850779] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55788 has claimed it. 
00:05:43.155 request: 00:05:43.155 { 00:05:43.155 "method": "framework_enable_cpumask_locks", 00:05:43.155 "req_id": 1 00:05:43.155 } 00:05:43.155 Got JSON-RPC error response 00:05:43.155 response: 00:05:43.155 { 00:05:43.155 "code": -32603, 00:05:43.155 "message": "Failed to claim CPU core: 2" 00:05:43.155 } 00:05:43.155 08:56:19 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:43.155 08:56:19 -- common/autotest_common.sh@653 -- # es=1 00:05:43.155 08:56:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.155 08:56:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.155 08:56:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.155 08:56:19 -- event/cpu_locks.sh@158 -- # waitforlisten 55788 /var/tmp/spdk.sock 00:05:43.155 08:56:19 -- common/autotest_common.sh@829 -- # '[' -z 55788 ']' 00:05:43.155 08:56:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.155 08:56:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.155 08:56:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.155 08:56:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.155 08:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:43.413 08:56:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.413 08:56:20 -- common/autotest_common.sh@862 -- # return 0 00:05:43.413 08:56:20 -- event/cpu_locks.sh@159 -- # waitforlisten 55806 /var/tmp/spdk2.sock 00:05:43.413 08:56:20 -- common/autotest_common.sh@829 -- # '[' -z 55806 ']' 00:05:43.413 08:56:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.413 08:56:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.413 08:56:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
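The JSON-RPC exchange above shows why the second target cannot promote itself to locked mode: framework_enable_cpumask_locks returns -32603 ("Failed to claim CPU core: 2") while pid 55788 still holds the lock on core 2. Assuming the standard scripts/rpc.py client (which the rpc_cmd wrapper in the log drives), the same calls could be issued by hand roughly like this:

  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails while core 2 is held elsewhere
  scripts/rpc.py framework_disable_cpumask_locks                         # drops the per-core lock files again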
00:05:43.413 08:56:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.413 08:56:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.673 08:56:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.673 08:56:20 -- common/autotest_common.sh@862 -- # return 0 00:05:43.673 08:56:20 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:43.673 ************************************ 00:05:43.673 END TEST locking_overlapped_coremask_via_rpc 00:05:43.673 ************************************ 00:05:43.673 08:56:20 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.673 08:56:20 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.673 08:56:20 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.673 00:05:43.673 real 0m2.524s 00:05:43.673 user 0m1.277s 00:05:43.673 sys 0m0.171s 00:05:43.673 08:56:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.673 08:56:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.673 08:56:20 -- event/cpu_locks.sh@174 -- # cleanup 00:05:43.673 08:56:20 -- event/cpu_locks.sh@15 -- # [[ -z 55788 ]] 00:05:43.673 08:56:20 -- event/cpu_locks.sh@15 -- # killprocess 55788 00:05:43.673 08:56:20 -- common/autotest_common.sh@936 -- # '[' -z 55788 ']' 00:05:43.673 08:56:20 -- common/autotest_common.sh@940 -- # kill -0 55788 00:05:43.673 08:56:20 -- common/autotest_common.sh@941 -- # uname 00:05:43.673 08:56:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.673 08:56:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55788 00:05:43.673 killing process with pid 55788 00:05:43.673 08:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.673 08:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.673 08:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55788' 00:05:43.673 08:56:20 -- common/autotest_common.sh@955 -- # kill 55788 00:05:43.673 08:56:20 -- common/autotest_common.sh@960 -- # wait 55788 00:05:43.933 08:56:20 -- event/cpu_locks.sh@16 -- # [[ -z 55806 ]] 00:05:43.933 08:56:20 -- event/cpu_locks.sh@16 -- # killprocess 55806 00:05:43.933 08:56:20 -- common/autotest_common.sh@936 -- # '[' -z 55806 ']' 00:05:43.933 08:56:20 -- common/autotest_common.sh@940 -- # kill -0 55806 00:05:43.933 08:56:20 -- common/autotest_common.sh@941 -- # uname 00:05:43.933 08:56:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.933 08:56:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55806 00:05:43.933 killing process with pid 55806 00:05:43.933 08:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:43.933 08:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:43.933 08:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55806' 00:05:43.933 08:56:20 -- common/autotest_common.sh@955 -- # kill 55806 00:05:43.933 08:56:20 -- common/autotest_common.sh@960 -- # wait 55806 00:05:44.192 08:56:21 -- event/cpu_locks.sh@18 -- # rm -f 00:05:44.192 08:56:21 -- event/cpu_locks.sh@1 -- # cleanup 00:05:44.192 08:56:21 -- event/cpu_locks.sh@15 -- # [[ -z 55788 ]] 00:05:44.192 08:56:21 -- event/cpu_locks.sh@15 -- # killprocess 55788 00:05:44.192 08:56:21 -- 
common/autotest_common.sh@936 -- # '[' -z 55788 ']' 00:05:44.192 08:56:21 -- common/autotest_common.sh@940 -- # kill -0 55788 00:05:44.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55788) - No such process 00:05:44.192 Process with pid 55788 is not found 00:05:44.192 08:56:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55788 is not found' 00:05:44.192 08:56:21 -- event/cpu_locks.sh@16 -- # [[ -z 55806 ]] 00:05:44.192 08:56:21 -- event/cpu_locks.sh@16 -- # killprocess 55806 00:05:44.192 08:56:21 -- common/autotest_common.sh@936 -- # '[' -z 55806 ']' 00:05:44.192 08:56:21 -- common/autotest_common.sh@940 -- # kill -0 55806 00:05:44.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55806) - No such process 00:05:44.192 Process with pid 55806 is not found 00:05:44.192 08:56:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55806 is not found' 00:05:44.192 08:56:21 -- event/cpu_locks.sh@18 -- # rm -f 00:05:44.192 ************************************ 00:05:44.192 END TEST cpu_locks 00:05:44.192 ************************************ 00:05:44.192 00:05:44.192 real 0m19.234s 00:05:44.192 user 0m35.021s 00:05:44.192 sys 0m4.272s 00:05:44.192 08:56:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.192 08:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.192 00:05:44.192 real 0m46.449s 00:05:44.192 user 1m30.396s 00:05:44.192 sys 0m7.493s 00:05:44.192 08:56:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.192 08:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.192 ************************************ 00:05:44.192 END TEST event 00:05:44.192 ************************************ 00:05:44.453 08:56:21 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:44.453 08:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.453 08:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.453 08:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.453 ************************************ 00:05:44.453 START TEST thread 00:05:44.453 ************************************ 00:05:44.453 08:56:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:44.453 * Looking for test storage... 
00:05:44.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:44.453 08:56:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.453 08:56:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.453 08:56:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.453 08:56:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.453 08:56:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.453 08:56:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.453 08:56:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.453 08:56:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.453 08:56:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.453 08:56:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.453 08:56:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.453 08:56:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.453 08:56:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.453 08:56:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.453 08:56:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.453 08:56:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.453 08:56:21 -- scripts/common.sh@344 -- # : 1 00:05:44.453 08:56:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.453 08:56:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.453 08:56:21 -- scripts/common.sh@364 -- # decimal 1 00:05:44.453 08:56:21 -- scripts/common.sh@352 -- # local d=1 00:05:44.453 08:56:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.453 08:56:21 -- scripts/common.sh@354 -- # echo 1 00:05:44.453 08:56:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.453 08:56:21 -- scripts/common.sh@365 -- # decimal 2 00:05:44.453 08:56:21 -- scripts/common.sh@352 -- # local d=2 00:05:44.453 08:56:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.453 08:56:21 -- scripts/common.sh@354 -- # echo 2 00:05:44.453 08:56:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.453 08:56:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.453 08:56:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.453 08:56:21 -- scripts/common.sh@367 -- # return 0 00:05:44.453 08:56:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.453 08:56:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.453 --rc genhtml_branch_coverage=1 00:05:44.453 --rc genhtml_function_coverage=1 00:05:44.453 --rc genhtml_legend=1 00:05:44.453 --rc geninfo_all_blocks=1 00:05:44.453 --rc geninfo_unexecuted_blocks=1 00:05:44.453 00:05:44.453 ' 00:05:44.453 08:56:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.453 --rc genhtml_branch_coverage=1 00:05:44.453 --rc genhtml_function_coverage=1 00:05:44.453 --rc genhtml_legend=1 00:05:44.453 --rc geninfo_all_blocks=1 00:05:44.453 --rc geninfo_unexecuted_blocks=1 00:05:44.453 00:05:44.453 ' 00:05:44.453 08:56:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.453 --rc genhtml_branch_coverage=1 00:05:44.453 --rc genhtml_function_coverage=1 00:05:44.453 --rc genhtml_legend=1 00:05:44.453 --rc geninfo_all_blocks=1 00:05:44.453 --rc geninfo_unexecuted_blocks=1 00:05:44.453 00:05:44.453 ' 00:05:44.453 08:56:21 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.453 --rc genhtml_branch_coverage=1 00:05:44.453 --rc genhtml_function_coverage=1 00:05:44.453 --rc genhtml_legend=1 00:05:44.453 --rc geninfo_all_blocks=1 00:05:44.453 --rc geninfo_unexecuted_blocks=1 00:05:44.453 00:05:44.453 ' 00:05:44.453 08:56:21 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:44.453 08:56:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:44.453 08:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.453 08:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.453 ************************************ 00:05:44.453 START TEST thread_poller_perf 00:05:44.453 ************************************ 00:05:44.453 08:56:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:44.453 [2024-11-17 08:56:21.312047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.453 [2024-11-17 08:56:21.312147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55930 ] 00:05:44.712 [2024-11-17 08:56:21.444366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.712 [2024-11-17 08:56:21.491494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.712 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:45.650 [2024-11-17T08:56:22.580Z] ====================================== 00:05:45.650 [2024-11-17T08:56:22.580Z] busy:2209446664 (cyc) 00:05:45.650 [2024-11-17T08:56:22.580Z] total_run_count: 353000 00:05:45.650 [2024-11-17T08:56:22.580Z] tsc_hz: 2200000000 (cyc) 00:05:45.650 [2024-11-17T08:56:22.580Z] ====================================== 00:05:45.650 [2024-11-17T08:56:22.580Z] poller_cost: 6259 (cyc), 2845 (nsec) 00:05:45.909 00:05:45.909 real 0m1.281s 00:05:45.910 user 0m1.139s 00:05:45.910 sys 0m0.035s 00:05:45.910 08:56:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.910 08:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.910 ************************************ 00:05:45.910 END TEST thread_poller_perf 00:05:45.910 ************************************ 00:05:45.910 08:56:22 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.910 08:56:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:45.910 08:56:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.910 08:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.910 ************************************ 00:05:45.910 START TEST thread_poller_perf 00:05:45.910 ************************************ 00:05:45.910 08:56:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.910 [2024-11-17 08:56:22.653893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:45.910 [2024-11-17 08:56:22.654008] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55960 ] 00:05:45.910 [2024-11-17 08:56:22.786683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.168 [2024-11-17 08:56:22.836528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.169 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:47.106 [2024-11-17T08:56:24.036Z] ====================================== 00:05:47.106 [2024-11-17T08:56:24.036Z] busy:2202639754 (cyc) 00:05:47.106 [2024-11-17T08:56:24.036Z] total_run_count: 4926000 00:05:47.106 [2024-11-17T08:56:24.036Z] tsc_hz: 2200000000 (cyc) 00:05:47.106 [2024-11-17T08:56:24.036Z] ====================================== 00:05:47.106 [2024-11-17T08:56:24.036Z] poller_cost: 447 (cyc), 203 (nsec) 00:05:47.106 00:05:47.106 real 0m1.280s 00:05:47.106 user 0m1.134s 00:05:47.106 sys 0m0.040s 00:05:47.106 08:56:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.106 08:56:23 -- common/autotest_common.sh@10 -- # set +x 00:05:47.106 ************************************ 00:05:47.106 END TEST thread_poller_perf 00:05:47.106 ************************************ 00:05:47.106 08:56:23 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:47.106 00:05:47.106 real 0m2.827s 00:05:47.106 user 0m2.401s 00:05:47.106 sys 0m0.213s 00:05:47.106 08:56:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.106 08:56:23 -- common/autotest_common.sh@10 -- # set +x 00:05:47.106 ************************************ 00:05:47.106 END TEST thread 00:05:47.106 ************************************ 00:05:47.106 08:56:23 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:47.106 08:56:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.106 08:56:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.106 08:56:23 -- common/autotest_common.sh@10 -- # set +x 00:05:47.106 ************************************ 00:05:47.106 START TEST accel 00:05:47.106 ************************************ 00:05:47.106 08:56:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:47.366 * Looking for test storage... 
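For reference, the poller_cost figures printed by the two thread_poller_perf runs above appear to be busy cycles divided by total_run_count, converted to nanoseconds with the reported 2200000000 cyc/s TSC. The numbers can be reproduced roughly like this:

  awk 'BEGIN { printf "%.0f cyc  %.0f nsec\n", 2209446664/353000,  2209446664/353000/2.2 }'    # ~6259 cyc, ~2845 nsec (1 us period)
  awk 'BEGIN { printf "%.0f cyc  %.0f nsec\n", 2202639754/4926000, 2202639754/4926000/2.2 }'   # ~447 cyc,  ~203 nsec (0 us period)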
00:05:47.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:47.366 08:56:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:47.366 08:56:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:47.366 08:56:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:47.366 08:56:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.366 08:56:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.366 08:56:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.366 08:56:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.366 08:56:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.366 08:56:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.366 08:56:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.366 08:56:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.366 08:56:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.366 08:56:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.366 08:56:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.366 08:56:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.366 08:56:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.366 08:56:24 -- scripts/common.sh@344 -- # : 1 00:05:47.366 08:56:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.366 08:56:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.366 08:56:24 -- scripts/common.sh@364 -- # decimal 1 00:05:47.366 08:56:24 -- scripts/common.sh@352 -- # local d=1 00:05:47.366 08:56:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.366 08:56:24 -- scripts/common.sh@354 -- # echo 1 00:05:47.366 08:56:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.366 08:56:24 -- scripts/common.sh@365 -- # decimal 2 00:05:47.366 08:56:24 -- scripts/common.sh@352 -- # local d=2 00:05:47.366 08:56:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.366 08:56:24 -- scripts/common.sh@354 -- # echo 2 00:05:47.366 08:56:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.366 08:56:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.366 08:56:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.366 08:56:24 -- scripts/common.sh@367 -- # return 0 00:05:47.366 08:56:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.366 08:56:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.366 --rc genhtml_branch_coverage=1 00:05:47.366 --rc genhtml_function_coverage=1 00:05:47.366 --rc genhtml_legend=1 00:05:47.366 --rc geninfo_all_blocks=1 00:05:47.366 --rc geninfo_unexecuted_blocks=1 00:05:47.366 00:05:47.366 ' 00:05:47.366 08:56:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.366 --rc genhtml_branch_coverage=1 00:05:47.366 --rc genhtml_function_coverage=1 00:05:47.366 --rc genhtml_legend=1 00:05:47.366 --rc geninfo_all_blocks=1 00:05:47.366 --rc geninfo_unexecuted_blocks=1 00:05:47.366 00:05:47.366 ' 00:05:47.366 08:56:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.366 --rc genhtml_branch_coverage=1 00:05:47.366 --rc genhtml_function_coverage=1 00:05:47.366 --rc genhtml_legend=1 00:05:47.366 --rc geninfo_all_blocks=1 00:05:47.366 --rc geninfo_unexecuted_blocks=1 00:05:47.366 00:05:47.366 ' 00:05:47.366 08:56:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.366 --rc genhtml_branch_coverage=1 00:05:47.366 --rc genhtml_function_coverage=1 00:05:47.366 --rc genhtml_legend=1 00:05:47.366 --rc geninfo_all_blocks=1 00:05:47.366 --rc geninfo_unexecuted_blocks=1 00:05:47.366 00:05:47.366 ' 00:05:47.366 08:56:24 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:47.366 08:56:24 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:47.366 08:56:24 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.366 08:56:24 -- accel/accel.sh@59 -- # spdk_tgt_pid=56047 00:05:47.366 08:56:24 -- accel/accel.sh@60 -- # waitforlisten 56047 00:05:47.366 08:56:24 -- common/autotest_common.sh@829 -- # '[' -z 56047 ']' 00:05:47.366 08:56:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.366 08:56:24 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:47.366 08:56:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.366 08:56:24 -- accel/accel.sh@58 -- # build_accel_config 00:05:47.366 08:56:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.366 08:56:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.366 08:56:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.366 08:56:24 -- common/autotest_common.sh@10 -- # set +x 00:05:47.366 08:56:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.366 08:56:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.366 08:56:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.366 08:56:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.366 08:56:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.366 08:56:24 -- accel/accel.sh@42 -- # jq -r . 00:05:47.366 [2024-11-17 08:56:24.242407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.366 [2024-11-17 08:56:24.242515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56047 ] 00:05:47.625 [2024-11-17 08:56:24.372835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.625 [2024-11-17 08:56:24.421400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.625 [2024-11-17 08:56:24.421649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.563 08:56:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.563 08:56:25 -- common/autotest_common.sh@862 -- # return 0 00:05:48.563 08:56:25 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:48.563 08:56:25 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:48.563 08:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.563 08:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.563 08:56:25 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:48.563 08:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.563 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.563 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.563 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.564 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.564 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.564 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.564 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.564 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:48.564 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.564 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.564 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.564 08:56:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:48.564 08:56:25 -- accel/accel.sh@64 -- # IFS== 00:05:48.564 08:56:25 -- accel/accel.sh@64 -- # read -r opc module 00:05:48.564 08:56:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:48.564 08:56:25 -- accel/accel.sh@67 -- # killprocess 56047 00:05:48.564 08:56:25 -- common/autotest_common.sh@936 -- # '[' -z 56047 ']' 00:05:48.564 08:56:25 -- common/autotest_common.sh@940 -- # kill -0 56047 00:05:48.564 08:56:25 -- common/autotest_common.sh@941 -- # uname 00:05:48.564 08:56:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.564 08:56:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56047 00:05:48.564 08:56:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.564 08:56:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.564 killing process with pid 56047 00:05:48.564 08:56:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56047' 00:05:48.564 08:56:25 -- common/autotest_common.sh@955 -- # kill 56047 00:05:48.564 08:56:25 -- common/autotest_common.sh@960 -- # wait 56047 00:05:48.823 08:56:25 -- accel/accel.sh@68 -- # trap - ERR 00:05:48.823 08:56:25 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:48.823 08:56:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:48.823 08:56:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.823 08:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.823 08:56:25 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:48.823 08:56:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:48.823 08:56:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.823 08:56:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.823 08:56:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.823 08:56:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.823 08:56:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.823 08:56:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.823 08:56:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.823 08:56:25 -- accel/accel.sh@42 -- # jq -r . 
00:05:48.823 08:56:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.823 08:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.823 08:56:25 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:48.823 08:56:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:48.823 08:56:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.823 08:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.823 ************************************ 00:05:48.823 START TEST accel_missing_filename 00:05:48.823 ************************************ 00:05:48.823 08:56:25 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:48.823 08:56:25 -- common/autotest_common.sh@650 -- # local es=0 00:05:48.823 08:56:25 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:48.823 08:56:25 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:48.823 08:56:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.823 08:56:25 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:48.823 08:56:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.823 08:56:25 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:48.823 08:56:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:48.823 08:56:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.823 08:56:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.823 08:56:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.823 08:56:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.823 08:56:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.823 08:56:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.823 08:56:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.823 08:56:25 -- accel/accel.sh@42 -- # jq -r . 00:05:48.823 [2024-11-17 08:56:25.671144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.823 [2024-11-17 08:56:25.671246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56093 ] 00:05:49.082 [2024-11-17 08:56:25.802451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.082 [2024-11-17 08:56:25.854486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.082 [2024-11-17 08:56:25.882931] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.082 [2024-11-17 08:56:25.922934] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:49.082 A filename is required. 
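The accel_missing_filename failure above is the outcome the harness wants: accel_perf was launched with -t 1 -w compress but no -l input file, so the app aborts during startup with "A filename is required.", and the exit-status bookkeeping that follows only confirms the command failed rather than succeeded. A minimal standalone reproduction, with the binary path taken from the trace and the run_test/NOT plumbing left out (illustrative sketch, not part of the CI run):

  perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  # compress needs an uncompressed input file via -l; omitting it must fail
  if "$perf" -t 1 -w compress; then
      echo "FAIL: compress without -l unexpectedly succeeded" >&2
      exit 1
  fi
  echo "OK: missing input filename was rejected"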
00:05:49.082 08:56:26 -- common/autotest_common.sh@653 -- # es=234 00:05:49.082 08:56:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.082 08:56:26 -- common/autotest_common.sh@662 -- # es=106 00:05:49.082 08:56:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:49.082 08:56:26 -- common/autotest_common.sh@670 -- # es=1 00:05:49.082 08:56:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.082 00:05:49.082 real 0m0.356s 00:05:49.082 user 0m0.239s 00:05:49.082 sys 0m0.064s 00:05:49.082 08:56:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.082 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.082 ************************************ 00:05:49.082 END TEST accel_missing_filename 00:05:49.082 ************************************ 00:05:49.341 08:56:26 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:49.341 08:56:26 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:49.341 08:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.341 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.341 ************************************ 00:05:49.341 START TEST accel_compress_verify 00:05:49.341 ************************************ 00:05:49.341 08:56:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:49.341 08:56:26 -- common/autotest_common.sh@650 -- # local es=0 00:05:49.341 08:56:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:49.341 08:56:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:49.341 08:56:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.341 08:56:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:49.341 08:56:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.341 08:56:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:49.341 08:56:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:49.341 08:56:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.341 08:56:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.342 08:56:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.342 08:56:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.342 08:56:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.342 08:56:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.342 08:56:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.342 08:56:26 -- accel/accel.sh@42 -- # jq -r . 00:05:49.342 [2024-11-17 08:56:26.079670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:49.342 [2024-11-17 08:56:26.079757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56123 ] 00:05:49.342 [2024-11-17 08:56:26.218048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.606 [2024-11-17 08:56:26.268351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.606 [2024-11-17 08:56:26.296351] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.606 [2024-11-17 08:56:26.332728] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:49.606 00:05:49.606 Compression does not support the verify option, aborting. 00:05:49.606 08:56:26 -- common/autotest_common.sh@653 -- # es=161 00:05:49.606 08:56:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.606 08:56:26 -- common/autotest_common.sh@662 -- # es=33 00:05:49.606 08:56:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:49.606 08:56:26 -- common/autotest_common.sh@670 -- # es=1 00:05:49.606 08:56:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.606 00:05:49.606 real 0m0.365s 00:05:49.606 user 0m0.247s 00:05:49.606 sys 0m0.067s 00:05:49.606 ************************************ 00:05:49.606 END TEST accel_compress_verify 00:05:49.606 ************************************ 00:05:49.606 08:56:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.606 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.606 08:56:26 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:49.606 08:56:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:49.606 08:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.606 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.606 ************************************ 00:05:49.606 START TEST accel_wrong_workload 00:05:49.606 ************************************ 00:05:49.606 08:56:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:49.606 08:56:26 -- common/autotest_common.sh@650 -- # local es=0 00:05:49.606 08:56:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:49.606 08:56:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:49.606 08:56:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.606 08:56:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:49.606 08:56:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.606 08:56:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:49.606 08:56:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:49.606 08:56:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.606 08:56:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.606 08:56:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.606 08:56:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.606 08:56:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.606 08:56:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.606 08:56:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.606 08:56:26 -- accel/accel.sh@42 -- # jq -r . 
00:05:49.606 Unsupported workload type: foobar 00:05:49.606 [2024-11-17 08:56:26.492021] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:49.607 accel_perf options: 00:05:49.607 [-h help message] 00:05:49.607 [-q queue depth per core] 00:05:49.607 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:49.607 [-T number of threads per core 00:05:49.607 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:49.607 [-t time in seconds] 00:05:49.607 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:49.607 [ dif_verify, , dif_generate, dif_generate_copy 00:05:49.607 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:49.607 [-l for compress/decompress workloads, name of uncompressed input file 00:05:49.607 [-S for crc32c workload, use this seed value (default 0) 00:05:49.607 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:49.607 [-f for fill workload, use this BYTE value (default 255) 00:05:49.607 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:49.607 [-y verify result if this switch is on] 00:05:49.607 [-a tasks to allocate per core (default: same value as -q)] 00:05:49.607 Can be used to spread operations across a wider range of memory. 00:05:49.607 ************************************ 00:05:49.607 END TEST accel_wrong_workload 00:05:49.607 ************************************ 00:05:49.607 08:56:26 -- common/autotest_common.sh@653 -- # es=1 00:05:49.607 08:56:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.607 08:56:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.607 08:56:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.607 00:05:49.607 real 0m0.033s 00:05:49.607 user 0m0.022s 00:05:49.607 sys 0m0.010s 00:05:49.607 08:56:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.607 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.867 08:56:26 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:49.867 08:56:26 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:49.867 08:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.867 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.867 ************************************ 00:05:49.867 START TEST accel_negative_buffers 00:05:49.867 ************************************ 00:05:49.867 08:56:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:49.867 08:56:26 -- common/autotest_common.sh@650 -- # local es=0 00:05:49.867 08:56:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:49.867 08:56:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:49.867 08:56:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.867 08:56:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:49.867 08:56:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.867 08:56:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:49.867 08:56:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:49.867 08:56:26 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:49.867 08:56:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.867 08:56:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.867 08:56:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.867 08:56:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.867 08:56:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.867 08:56:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.867 08:56:26 -- accel/accel.sh@42 -- # jq -r . 00:05:49.867 -x option must be non-negative. 00:05:49.867 [2024-11-17 08:56:26.567901] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:49.867 accel_perf options: 00:05:49.867 [-h help message] 00:05:49.867 [-q queue depth per core] 00:05:49.867 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:49.867 [-T number of threads per core 00:05:49.867 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:49.867 [-t time in seconds] 00:05:49.867 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:49.867 [ dif_verify, , dif_generate, dif_generate_copy 00:05:49.867 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:49.867 [-l for compress/decompress workloads, name of uncompressed input file 00:05:49.867 [-S for crc32c workload, use this seed value (default 0) 00:05:49.867 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:49.867 [-f for fill workload, use this BYTE value (default 255) 00:05:49.867 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:49.867 [-y verify result if this switch is on] 00:05:49.867 [-a tasks to allocate per core (default: same value as -q)] 00:05:49.867 Can be used to spread operations across a wider range of memory. 
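Both negative tests above lean entirely on accel_perf's own argument validation: foobar is not in the printed workload list, and -x -1 violates the documented minimum of two xor source buffers, so each invocation exits non-zero before any work is queued and the harness's NOT wrapper counts that failure as a pass. For contrast, an accepted form of the same xor flags would look like this (illustrative invocation using the binary path from the trace, not part of this CI run):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -x 2 -y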
00:05:49.867 08:56:26 -- common/autotest_common.sh@653 -- # es=1 00:05:49.868 08:56:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.868 08:56:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.868 08:56:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.868 00:05:49.868 real 0m0.028s 00:05:49.868 user 0m0.018s 00:05:49.868 sys 0m0.009s 00:05:49.868 ************************************ 00:05:49.868 END TEST accel_negative_buffers 00:05:49.868 ************************************ 00:05:49.868 08:56:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.868 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.868 08:56:26 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:49.868 08:56:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:49.868 08:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.868 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.868 ************************************ 00:05:49.868 START TEST accel_crc32c 00:05:49.868 ************************************ 00:05:49.868 08:56:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:49.868 08:56:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.868 08:56:26 -- accel/accel.sh@17 -- # local accel_module 00:05:49.868 08:56:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:49.868 08:56:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:49.868 08:56:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.868 08:56:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.868 08:56:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.868 08:56:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.868 08:56:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.868 08:56:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.868 08:56:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.868 08:56:26 -- accel/accel.sh@42 -- # jq -r . 00:05:49.868 [2024-11-17 08:56:26.642502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.868 [2024-11-17 08:56:26.642586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56176 ] 00:05:49.868 [2024-11-17 08:56:26.781805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.127 [2024-11-17 08:56:26.848255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.506 08:56:28 -- accel/accel.sh@18 -- # out=' 00:05:51.506 SPDK Configuration: 00:05:51.506 Core mask: 0x1 00:05:51.506 00:05:51.506 Accel Perf Configuration: 00:05:51.506 Workload Type: crc32c 00:05:51.506 CRC-32C seed: 32 00:05:51.506 Transfer size: 4096 bytes 00:05:51.506 Vector count 1 00:05:51.506 Module: software 00:05:51.506 Queue depth: 32 00:05:51.506 Allocate depth: 32 00:05:51.506 # threads/core: 1 00:05:51.506 Run time: 1 seconds 00:05:51.506 Verify: Yes 00:05:51.506 00:05:51.506 Running for 1 seconds... 
00:05:51.506 00:05:51.506 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.506 ------------------------------------------------------------------------------------ 00:05:51.506 0,0 509024/s 1988 MiB/s 0 0 00:05:51.506 ==================================================================================== 00:05:51.506 Total 509024/s 1988 MiB/s 0 0' 00:05:51.506 08:56:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:51.506 08:56:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.506 08:56:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.506 08:56:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.506 08:56:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.506 08:56:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.506 08:56:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.506 08:56:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.506 08:56:28 -- accel/accel.sh@42 -- # jq -r . 00:05:51.506 [2024-11-17 08:56:28.030397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.506 [2024-11-17 08:56:28.030496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56201 ] 00:05:51.506 [2024-11-17 08:56:28.159682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.506 [2024-11-17 08:56:28.206140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val= 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val= 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val=0x1 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val= 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val= 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val=crc32c 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val=32 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val= 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val=software 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val=32 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val=32 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val=1 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val=Yes 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val= 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.506 08:56:28 -- accel/accel.sh@21 -- # val= 00:05:51.506 08:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.506 08:56:28 -- accel/accel.sh@20 -- # read -r var val 00:05:52.456 08:56:29 -- accel/accel.sh@21 -- # val= 00:05:52.456 08:56:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.456 08:56:29 -- accel/accel.sh@21 -- # val= 00:05:52.456 08:56:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.456 08:56:29 -- accel/accel.sh@21 -- # val= 00:05:52.456 08:56:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.456 08:56:29 -- accel/accel.sh@21 -- # val= 00:05:52.456 08:56:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.456 08:56:29 -- accel/accel.sh@21 -- # val= 00:05:52.456 08:56:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.456 08:56:29 -- 
accel/accel.sh@20 -- # read -r var val 00:05:52.456 08:56:29 -- accel/accel.sh@21 -- # val= 00:05:52.456 08:56:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.456 08:56:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.456 08:56:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.456 08:56:29 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:52.456 08:56:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.456 00:05:52.456 real 0m2.740s 00:05:52.456 user 0m2.407s 00:05:52.456 sys 0m0.135s 00:05:52.456 08:56:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.456 08:56:29 -- common/autotest_common.sh@10 -- # set +x 00:05:52.456 ************************************ 00:05:52.456 END TEST accel_crc32c 00:05:52.456 ************************************ 00:05:52.716 08:56:29 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:52.716 08:56:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:52.716 08:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.716 08:56:29 -- common/autotest_common.sh@10 -- # set +x 00:05:52.716 ************************************ 00:05:52.716 START TEST accel_crc32c_C2 00:05:52.716 ************************************ 00:05:52.716 08:56:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:52.716 08:56:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.716 08:56:29 -- accel/accel.sh@17 -- # local accel_module 00:05:52.716 08:56:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:52.716 08:56:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:52.716 08:56:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.716 08:56:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.716 08:56:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.716 08:56:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.716 08:56:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.716 08:56:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.716 08:56:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.716 08:56:29 -- accel/accel.sh@42 -- # jq -r . 00:05:52.716 [2024-11-17 08:56:29.434827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.716 [2024-11-17 08:56:29.434927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56230 ] 00:05:52.716 [2024-11-17 08:56:29.570006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.716 [2024-11-17 08:56:29.616521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.094 08:56:30 -- accel/accel.sh@18 -- # out=' 00:05:54.094 SPDK Configuration: 00:05:54.094 Core mask: 0x1 00:05:54.094 00:05:54.094 Accel Perf Configuration: 00:05:54.094 Workload Type: crc32c 00:05:54.094 CRC-32C seed: 0 00:05:54.094 Transfer size: 4096 bytes 00:05:54.094 Vector count 2 00:05:54.094 Module: software 00:05:54.094 Queue depth: 32 00:05:54.094 Allocate depth: 32 00:05:54.094 # threads/core: 1 00:05:54.094 Run time: 1 seconds 00:05:54.094 Verify: Yes 00:05:54.094 00:05:54.094 Running for 1 seconds... 
00:05:54.094 00:05:54.094 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.094 ------------------------------------------------------------------------------------ 00:05:54.094 0,0 395712/s 3091 MiB/s 0 0 00:05:54.094 ==================================================================================== 00:05:54.094 Total 395712/s 1545 MiB/s 0 0' 00:05:54.094 08:56:30 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:30 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:54.094 08:56:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:54.094 08:56:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.094 08:56:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.094 08:56:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.094 08:56:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.094 08:56:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.094 08:56:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.094 08:56:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.094 08:56:30 -- accel/accel.sh@42 -- # jq -r . 00:05:54.094 [2024-11-17 08:56:30.787292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.094 [2024-11-17 08:56:30.787393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56244 ] 00:05:54.094 [2024-11-17 08:56:30.923052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.094 [2024-11-17 08:56:30.972505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.094 08:56:30 -- accel/accel.sh@21 -- # val= 00:05:54.094 08:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:30 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:30 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:30 -- accel/accel.sh@21 -- # val= 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- accel/accel.sh@21 -- # val=0x1 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- accel/accel.sh@21 -- # val= 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- accel/accel.sh@21 -- # val= 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- accel/accel.sh@21 -- # val=crc32c 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- accel/accel.sh@21 -- # val=0 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- accel/accel.sh@21 -- # val= 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- accel/accel.sh@21 -- # val=software 00:05:54.094 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.094 08:56:31 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.094 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.094 08:56:31 -- accel/accel.sh@21 -- # val=32 00:05:54.095 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.095 08:56:31 -- accel/accel.sh@21 -- # val=32 00:05:54.095 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.095 08:56:31 -- accel/accel.sh@21 -- # val=1 00:05:54.095 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.095 08:56:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.095 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.095 08:56:31 -- accel/accel.sh@21 -- # val=Yes 00:05:54.095 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.095 08:56:31 -- accel/accel.sh@21 -- # val= 00:05:54.095 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.095 08:56:31 -- accel/accel.sh@21 -- # val= 00:05:54.095 08:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.095 08:56:31 -- accel/accel.sh@20 -- # read -r var val 00:05:55.473 08:56:32 -- accel/accel.sh@21 -- # val= 00:05:55.473 08:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.473 08:56:32 -- accel/accel.sh@21 -- # val= 00:05:55.473 08:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.473 08:56:32 -- accel/accel.sh@21 -- # val= 00:05:55.473 08:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.473 08:56:32 -- accel/accel.sh@21 -- # val= 00:05:55.473 08:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.473 08:56:32 -- accel/accel.sh@21 -- # val= 00:05:55.473 08:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.473 08:56:32 -- 
accel/accel.sh@20 -- # read -r var val 00:05:55.473 08:56:32 -- accel/accel.sh@21 -- # val= 00:05:55.473 08:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # IFS=: 00:05:55.473 08:56:32 -- accel/accel.sh@20 -- # read -r var val 00:05:55.473 08:56:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.473 08:56:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:55.473 08:56:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.473 00:05:55.473 real 0m2.713s 00:05:55.473 user 0m2.382s 00:05:55.473 sys 0m0.133s 00:05:55.473 08:56:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.473 08:56:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.473 ************************************ 00:05:55.473 END TEST accel_crc32c_C2 00:05:55.473 ************************************ 00:05:55.473 08:56:32 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:55.473 08:56:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:55.473 08:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.473 08:56:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.473 ************************************ 00:05:55.473 START TEST accel_copy 00:05:55.473 ************************************ 00:05:55.473 08:56:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:55.473 08:56:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.473 08:56:32 -- accel/accel.sh@17 -- # local accel_module 00:05:55.473 08:56:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:55.473 08:56:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:55.473 08:56:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.473 08:56:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.473 08:56:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.474 08:56:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.474 08:56:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.474 08:56:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.474 08:56:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.474 08:56:32 -- accel/accel.sh@42 -- # jq -r . 00:05:55.474 [2024-11-17 08:56:32.198920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.474 [2024-11-17 08:56:32.199016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56283 ] 00:05:55.474 [2024-11-17 08:56:32.329576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.474 [2024-11-17 08:56:32.376315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.852 08:56:33 -- accel/accel.sh@18 -- # out=' 00:05:56.852 SPDK Configuration: 00:05:56.852 Core mask: 0x1 00:05:56.852 00:05:56.852 Accel Perf Configuration: 00:05:56.852 Workload Type: copy 00:05:56.852 Transfer size: 4096 bytes 00:05:56.852 Vector count 1 00:05:56.852 Module: software 00:05:56.852 Queue depth: 32 00:05:56.852 Allocate depth: 32 00:05:56.852 # threads/core: 1 00:05:56.852 Run time: 1 seconds 00:05:56.852 Verify: Yes 00:05:56.852 00:05:56.852 Running for 1 seconds... 
00:05:56.852 00:05:56.852 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.852 ------------------------------------------------------------------------------------ 00:05:56.852 0,0 362016/s 1414 MiB/s 0 0 00:05:56.852 ==================================================================================== 00:05:56.852 Total 362016/s 1414 MiB/s 0 0' 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:56.852 08:56:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:56.852 08:56:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.852 08:56:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.852 08:56:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.852 08:56:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.852 08:56:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.852 08:56:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.852 08:56:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.852 08:56:33 -- accel/accel.sh@42 -- # jq -r . 00:05:56.852 [2024-11-17 08:56:33.554611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.852 [2024-11-17 08:56:33.554710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56298 ] 00:05:56.852 [2024-11-17 08:56:33.688837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.852 [2024-11-17 08:56:33.737546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val= 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val= 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val=0x1 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val= 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val= 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val=copy 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- 
accel/accel.sh@21 -- # val= 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val=software 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val=32 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val=32 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val=1 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.852 08:56:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.852 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.852 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.853 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.853 08:56:33 -- accel/accel.sh@21 -- # val=Yes 00:05:56.853 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.853 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.853 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.853 08:56:33 -- accel/accel.sh@21 -- # val= 00:05:56.853 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.853 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.853 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:56.853 08:56:33 -- accel/accel.sh@21 -- # val= 00:05:56.853 08:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.853 08:56:33 -- accel/accel.sh@20 -- # IFS=: 00:05:56.853 08:56:33 -- accel/accel.sh@20 -- # read -r var val 00:05:58.231 08:56:34 -- accel/accel.sh@21 -- # val= 00:05:58.231 08:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # IFS=: 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # read -r var val 00:05:58.231 08:56:34 -- accel/accel.sh@21 -- # val= 00:05:58.231 08:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # IFS=: 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # read -r var val 00:05:58.231 08:56:34 -- accel/accel.sh@21 -- # val= 00:05:58.231 08:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # IFS=: 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # read -r var val 00:05:58.231 08:56:34 -- accel/accel.sh@21 -- # val= 00:05:58.231 08:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # IFS=: 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # read -r var val 00:05:58.231 08:56:34 -- accel/accel.sh@21 -- # val= 00:05:58.231 08:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # IFS=: 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # read -r var val 00:05:58.231 08:56:34 -- accel/accel.sh@21 -- # val= 00:05:58.231 08:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.231 08:56:34 -- accel/accel.sh@20 -- # IFS=: 00:05:58.231 08:56:34 -- 
accel/accel.sh@20 -- # read -r var val 00:05:58.231 08:56:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.231 08:56:34 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:58.231 08:56:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.231 00:05:58.231 real 0m2.713s 00:05:58.231 user 0m2.385s 00:05:58.231 sys 0m0.128s 00:05:58.231 08:56:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.231 08:56:34 -- common/autotest_common.sh@10 -- # set +x 00:05:58.231 ************************************ 00:05:58.231 END TEST accel_copy 00:05:58.231 ************************************ 00:05:58.231 08:56:34 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:58.231 08:56:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:58.231 08:56:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.231 08:56:34 -- common/autotest_common.sh@10 -- # set +x 00:05:58.231 ************************************ 00:05:58.231 START TEST accel_fill 00:05:58.231 ************************************ 00:05:58.231 08:56:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:58.231 08:56:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.231 08:56:34 -- accel/accel.sh@17 -- # local accel_module 00:05:58.231 08:56:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:58.231 08:56:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:58.231 08:56:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.231 08:56:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.231 08:56:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.231 08:56:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.231 08:56:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.231 08:56:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.231 08:56:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.231 08:56:34 -- accel/accel.sh@42 -- # jq -r . 00:05:58.231 [2024-11-17 08:56:34.967606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.231 [2024-11-17 08:56:34.967705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56327 ] 00:05:58.231 [2024-11-17 08:56:35.104531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.231 [2024-11-17 08:56:35.150687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.611 08:56:36 -- accel/accel.sh@18 -- # out=' 00:05:59.611 SPDK Configuration: 00:05:59.611 Core mask: 0x1 00:05:59.611 00:05:59.611 Accel Perf Configuration: 00:05:59.611 Workload Type: fill 00:05:59.611 Fill pattern: 0x80 00:05:59.611 Transfer size: 4096 bytes 00:05:59.611 Vector count 1 00:05:59.611 Module: software 00:05:59.611 Queue depth: 64 00:05:59.611 Allocate depth: 64 00:05:59.611 # threads/core: 1 00:05:59.611 Run time: 1 seconds 00:05:59.611 Verify: Yes 00:05:59.611 00:05:59.611 Running for 1 seconds... 
00:05:59.611 00:05:59.611 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.611 ------------------------------------------------------------------------------------ 00:05:59.611 0,0 518272/s 2024 MiB/s 0 0 00:05:59.611 ==================================================================================== 00:05:59.611 Total 518272/s 2024 MiB/s 0 0' 00:05:59.611 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.611 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.611 08:56:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:59.611 08:56:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:59.611 08:56:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.611 08:56:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.611 08:56:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.611 08:56:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.611 08:56:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.611 08:56:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.611 08:56:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.611 08:56:36 -- accel/accel.sh@42 -- # jq -r . 00:05:59.611 [2024-11-17 08:56:36.328146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.611 [2024-11-17 08:56:36.328240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56347 ] 00:05:59.611 [2024-11-17 08:56:36.466009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.611 [2024-11-17 08:56:36.513159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.870 08:56:36 -- accel/accel.sh@21 -- # val= 00:05:59.870 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.870 08:56:36 -- accel/accel.sh@21 -- # val= 00:05:59.870 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.870 08:56:36 -- accel/accel.sh@21 -- # val=0x1 00:05:59.870 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.870 08:56:36 -- accel/accel.sh@21 -- # val= 00:05:59.870 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.870 08:56:36 -- accel/accel.sh@21 -- # val= 00:05:59.870 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.870 08:56:36 -- accel/accel.sh@21 -- # val=fill 00:05:59.870 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.870 08:56:36 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.870 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.870 08:56:36 -- accel/accel.sh@21 -- # val=0x80 00:05:59.870 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 
00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val= 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val=software 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val=64 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val=64 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val=1 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val=Yes 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val= 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.871 08:56:36 -- accel/accel.sh@21 -- # val= 00:05:59.871 08:56:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.871 08:56:36 -- accel/accel.sh@20 -- # read -r var val 00:06:00.838 08:56:37 -- accel/accel.sh@21 -- # val= 00:06:00.838 08:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.838 08:56:37 -- accel/accel.sh@21 -- # val= 00:06:00.838 08:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.838 08:56:37 -- accel/accel.sh@21 -- # val= 00:06:00.838 08:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.838 08:56:37 -- accel/accel.sh@21 -- # val= 00:06:00.838 08:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.838 08:56:37 -- accel/accel.sh@21 -- # val= 00:06:00.838 08:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # IFS=: 
00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.838 08:56:37 -- accel/accel.sh@21 -- # val= 00:06:00.838 08:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # IFS=: 00:06:00.838 08:56:37 -- accel/accel.sh@20 -- # read -r var val 00:06:00.838 08:56:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.838 08:56:37 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:00.838 08:56:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.838 00:06:00.838 real 0m2.740s 00:06:00.838 user 0m2.395s 00:06:00.838 sys 0m0.147s 00:06:00.838 08:56:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.838 08:56:37 -- common/autotest_common.sh@10 -- # set +x 00:06:00.838 ************************************ 00:06:00.838 END TEST accel_fill 00:06:00.838 ************************************ 00:06:00.838 08:56:37 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:00.838 08:56:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:00.838 08:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.838 08:56:37 -- common/autotest_common.sh@10 -- # set +x 00:06:00.838 ************************************ 00:06:00.838 START TEST accel_copy_crc32c 00:06:00.838 ************************************ 00:06:00.838 08:56:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:00.838 08:56:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.838 08:56:37 -- accel/accel.sh@17 -- # local accel_module 00:06:00.838 08:56:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:00.838 08:56:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:00.838 08:56:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.838 08:56:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.838 08:56:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.838 08:56:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.838 08:56:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.838 08:56:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.838 08:56:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.838 08:56:37 -- accel/accel.sh@42 -- # jq -r . 00:06:00.838 [2024-11-17 08:56:37.756247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.839 [2024-11-17 08:56:37.756361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56381 ] 00:06:01.097 [2024-11-17 08:56:37.890734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.097 [2024-11-17 08:56:37.940094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.481 08:56:39 -- accel/accel.sh@18 -- # out=' 00:06:02.481 SPDK Configuration: 00:06:02.481 Core mask: 0x1 00:06:02.481 00:06:02.481 Accel Perf Configuration: 00:06:02.481 Workload Type: copy_crc32c 00:06:02.481 CRC-32C seed: 0 00:06:02.482 Vector size: 4096 bytes 00:06:02.482 Transfer size: 4096 bytes 00:06:02.482 Vector count 1 00:06:02.482 Module: software 00:06:02.482 Queue depth: 32 00:06:02.482 Allocate depth: 32 00:06:02.482 # threads/core: 1 00:06:02.482 Run time: 1 seconds 00:06:02.482 Verify: Yes 00:06:02.482 00:06:02.482 Running for 1 seconds... 
00:06:02.482 00:06:02.482 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.482 ------------------------------------------------------------------------------------ 00:06:02.482 0,0 282656/s 1104 MiB/s 0 0 00:06:02.482 ==================================================================================== 00:06:02.482 Total 282656/s 1104 MiB/s 0 0' 00:06:02.482 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.482 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.482 08:56:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:02.482 08:56:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:02.482 08:56:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.482 08:56:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.482 08:56:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.482 08:56:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.482 08:56:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.482 08:56:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.482 08:56:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.482 08:56:39 -- accel/accel.sh@42 -- # jq -r . 00:06:02.483 [2024-11-17 08:56:39.106326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.483 [2024-11-17 08:56:39.106425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56400 ] 00:06:02.483 [2024-11-17 08:56:39.241655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.483 [2024-11-17 08:56:39.287839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.483 08:56:39 -- accel/accel.sh@21 -- # val= 00:06:02.483 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.483 08:56:39 -- accel/accel.sh@21 -- # val= 00:06:02.483 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.483 08:56:39 -- accel/accel.sh@21 -- # val=0x1 00:06:02.483 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.483 08:56:39 -- accel/accel.sh@21 -- # val= 00:06:02.483 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.483 08:56:39 -- accel/accel.sh@21 -- # val= 00:06:02.483 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.483 08:56:39 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:02.483 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.483 08:56:39 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.483 08:56:39 -- accel/accel.sh@21 -- # val=0 00:06:02.483 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.483 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.484 
08:56:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.484 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.484 08:56:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.484 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.484 08:56:39 -- accel/accel.sh@21 -- # val= 00:06:02.484 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.484 08:56:39 -- accel/accel.sh@21 -- # val=software 00:06:02.484 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.484 08:56:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.484 08:56:39 -- accel/accel.sh@21 -- # val=32 00:06:02.484 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.484 08:56:39 -- accel/accel.sh@21 -- # val=32 00:06:02.484 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.484 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.485 08:56:39 -- accel/accel.sh@21 -- # val=1 00:06:02.485 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.485 08:56:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.485 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.485 08:56:39 -- accel/accel.sh@21 -- # val=Yes 00:06:02.485 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.485 08:56:39 -- accel/accel.sh@21 -- # val= 00:06:02.485 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.485 08:56:39 -- accel/accel.sh@21 -- # val= 00:06:02.485 08:56:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.485 08:56:39 -- accel/accel.sh@20 -- # read -r var val 00:06:03.921 08:56:40 -- accel/accel.sh@21 -- # val= 00:06:03.921 08:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.921 08:56:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.921 08:56:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.921 08:56:40 -- accel/accel.sh@21 -- # val= 00:06:03.921 08:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.921 08:56:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.921 08:56:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.921 08:56:40 -- accel/accel.sh@21 -- # val= 00:06:03.921 08:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.921 08:56:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.921 08:56:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.921 08:56:40 -- accel/accel.sh@21 -- # val= 00:06:03.922 08:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.922 08:56:40 -- accel/accel.sh@20 -- # IFS=: 
00:06:03.922 08:56:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.922 08:56:40 -- accel/accel.sh@21 -- # val= 00:06:03.922 08:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.922 08:56:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.922 08:56:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.922 08:56:40 -- accel/accel.sh@21 -- # val= 00:06:03.922 08:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.922 08:56:40 -- accel/accel.sh@20 -- # IFS=: 00:06:03.922 08:56:40 -- accel/accel.sh@20 -- # read -r var val 00:06:03.922 08:56:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.922 08:56:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:03.922 08:56:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.922 00:06:03.922 real 0m2.714s 00:06:03.922 user 0m2.359s 00:06:03.922 sys 0m0.155s 00:06:03.922 08:56:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.922 ************************************ 00:06:03.922 END TEST accel_copy_crc32c 00:06:03.922 ************************************ 00:06:03.922 08:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:03.922 08:56:40 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:03.922 08:56:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:03.922 08:56:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.922 08:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:03.922 ************************************ 00:06:03.922 START TEST accel_copy_crc32c_C2 00:06:03.922 ************************************ 00:06:03.922 08:56:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:03.922 08:56:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.922 08:56:40 -- accel/accel.sh@17 -- # local accel_module 00:06:03.922 08:56:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:03.922 08:56:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:03.922 08:56:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.922 08:56:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.922 08:56:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.922 08:56:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.922 08:56:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.922 08:56:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.922 08:56:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.922 08:56:40 -- accel/accel.sh@42 -- # jq -r . 00:06:03.922 [2024-11-17 08:56:40.516686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:03.922 [2024-11-17 08:56:40.516785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56430 ] 00:06:03.922 [2024-11-17 08:56:40.654840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.922 [2024-11-17 08:56:40.721857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.300 08:56:41 -- accel/accel.sh@18 -- # out=' 00:06:05.300 SPDK Configuration: 00:06:05.300 Core mask: 0x1 00:06:05.300 00:06:05.300 Accel Perf Configuration: 00:06:05.300 Workload Type: copy_crc32c 00:06:05.300 CRC-32C seed: 0 00:06:05.300 Vector size: 4096 bytes 00:06:05.300 Transfer size: 8192 bytes 00:06:05.300 Vector count 2 00:06:05.300 Module: software 00:06:05.300 Queue depth: 32 00:06:05.300 Allocate depth: 32 00:06:05.300 # threads/core: 1 00:06:05.300 Run time: 1 seconds 00:06:05.300 Verify: Yes 00:06:05.300 00:06:05.300 Running for 1 seconds... 00:06:05.300 00:06:05.300 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:05.300 ------------------------------------------------------------------------------------ 00:06:05.300 0,0 194368/s 1518 MiB/s 0 0 00:06:05.300 ==================================================================================== 00:06:05.300 Total 194368/s 1518 MiB/s 0 0' 00:06:05.300 08:56:41 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:41 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:05.300 08:56:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:05.300 08:56:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.300 08:56:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.300 08:56:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.300 08:56:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.300 08:56:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.300 08:56:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.300 08:56:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.300 08:56:41 -- accel/accel.sh@42 -- # jq -r . 00:06:05.300 [2024-11-17 08:56:41.913735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
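The Bandwidth column in the tables above is just the Transfers column scaled by the reported transfer size, so on a single-core run the per-core row and the Total row must agree. A quick bash/awk check of the copy_crc32c -C 2 table above, using only the numbers printed in this log (the snippet is a plain MiB/s conversion, not accel_perf's internal code):

  # 194368 transfers/s and 8192 bytes per transfer are the values reported above.
  transfers_per_sec=194368
  transfer_size_bytes=8192
  awk -v n="$transfers_per_sec" -v sz="$transfer_size_bytes" \
      'BEGIN { printf "%.1f MiB/s\n", n * sz / (1024 * 1024) }'
  # Prints 1518.5 MiB/s, i.e. the 1518 MiB/s shown in both rows of the table.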
00:06:05.300 [2024-11-17 08:56:41.913825] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56449 ] 00:06:05.300 [2024-11-17 08:56:42.048560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.300 [2024-11-17 08:56:42.096933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val= 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val= 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val=0x1 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val= 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val= 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val=0 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val= 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val=software 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val=32 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val=32 
00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.300 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.300 08:56:42 -- accel/accel.sh@21 -- # val=1 00:06:05.300 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.301 08:56:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.301 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.301 08:56:42 -- accel/accel.sh@21 -- # val=Yes 00:06:05.301 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.301 08:56:42 -- accel/accel.sh@21 -- # val= 00:06:05.301 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:05.301 08:56:42 -- accel/accel.sh@21 -- # val= 00:06:05.301 08:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # IFS=: 00:06:05.301 08:56:42 -- accel/accel.sh@20 -- # read -r var val 00:06:06.679 08:56:43 -- accel/accel.sh@21 -- # val= 00:06:06.679 08:56:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.679 08:56:43 -- accel/accel.sh@21 -- # val= 00:06:06.679 08:56:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.679 08:56:43 -- accel/accel.sh@21 -- # val= 00:06:06.679 08:56:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.679 08:56:43 -- accel/accel.sh@21 -- # val= 00:06:06.679 08:56:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.679 08:56:43 -- accel/accel.sh@21 -- # val= 00:06:06.679 08:56:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.679 08:56:43 -- accel/accel.sh@21 -- # val= 00:06:06.679 08:56:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # IFS=: 00:06:06.679 08:56:43 -- accel/accel.sh@20 -- # read -r var val 00:06:06.679 08:56:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.679 08:56:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:06.679 08:56:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.679 00:06:06.679 real 0m2.761s 00:06:06.679 user 0m2.413s 00:06:06.679 sys 0m0.150s 00:06:06.679 08:56:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.679 08:56:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.679 ************************************ 00:06:06.679 END TEST accel_copy_crc32c_C2 00:06:06.679 ************************************ 00:06:06.679 08:56:43 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:06.679 08:56:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:06.679 08:56:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.679 08:56:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.679 ************************************ 00:06:06.679 START TEST accel_dualcast 00:06:06.679 ************************************ 00:06:06.679 08:56:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:06.679 08:56:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.679 08:56:43 -- accel/accel.sh@17 -- # local accel_module 00:06:06.679 08:56:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:06.679 08:56:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:06.679 08:56:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.679 08:56:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.679 08:56:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.679 08:56:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.679 08:56:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.679 08:56:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.679 08:56:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.679 08:56:43 -- accel/accel.sh@42 -- # jq -r . 00:06:06.679 [2024-11-17 08:56:43.330152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.679 [2024-11-17 08:56:43.330242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56484 ] 00:06:06.679 [2024-11-17 08:56:43.469561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.679 [2024-11-17 08:56:43.536919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.058 08:56:44 -- accel/accel.sh@18 -- # out=' 00:06:08.058 SPDK Configuration: 00:06:08.058 Core mask: 0x1 00:06:08.058 00:06:08.058 Accel Perf Configuration: 00:06:08.058 Workload Type: dualcast 00:06:08.058 Transfer size: 4096 bytes 00:06:08.058 Vector count 1 00:06:08.058 Module: software 00:06:08.058 Queue depth: 32 00:06:08.058 Allocate depth: 32 00:06:08.058 # threads/core: 1 00:06:08.058 Run time: 1 seconds 00:06:08.058 Verify: Yes 00:06:08.058 00:06:08.058 Running for 1 seconds... 00:06:08.058 00:06:08.058 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.058 ------------------------------------------------------------------------------------ 00:06:08.058 0,0 377952/s 1476 MiB/s 0 0 00:06:08.058 ==================================================================================== 00:06:08.058 Total 377952/s 1476 MiB/s 0 0' 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:08.058 08:56:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:08.058 08:56:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.058 08:56:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.058 08:56:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.058 08:56:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.058 08:56:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.058 08:56:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.058 08:56:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.058 08:56:44 -- accel/accel.sh@42 -- # jq -r . 
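The dualcast numbers above come from the accel_perf example binary that accel.sh drives; the second pass traced here repeats the workload with a JSON config on fd 62. A minimal sketch of re-running the first-pass dualcast workload by hand, with the binary path and flags copied from this log (the path is specific to this CI VM, so point it at your own SPDK build if you try it):

  # Path used by accel.sh in this job; adjust for a local SPDK checkout.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

  # -t 1: run for 1 second, -w dualcast: workload type, -y: verify results
  # (matches the 'Verify: Yes' line in the configuration dump above).
  "$ACCEL_PERF" -t 1 -w dualcast -y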
00:06:08.058 [2024-11-17 08:56:44.716341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.058 [2024-11-17 08:56:44.716416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56498 ] 00:06:08.058 [2024-11-17 08:56:44.849434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.058 [2024-11-17 08:56:44.899364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val= 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val= 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val=0x1 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val= 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val= 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val=dualcast 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val= 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val=software 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val=32 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val=32 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 08:56:44 -- accel/accel.sh@21 -- # val=1 00:06:08.058 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 
08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 08:56:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.059 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 08:56:44 -- accel/accel.sh@21 -- # val=Yes 00:06:08.059 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 08:56:44 -- accel/accel.sh@21 -- # val= 00:06:08.059 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 08:56:44 -- accel/accel.sh@21 -- # val= 00:06:08.059 08:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 08:56:44 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 08:56:44 -- accel/accel.sh@20 -- # read -r var val 00:06:09.438 08:56:46 -- accel/accel.sh@21 -- # val= 00:06:09.438 08:56:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.438 08:56:46 -- accel/accel.sh@21 -- # val= 00:06:09.438 08:56:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.438 08:56:46 -- accel/accel.sh@21 -- # val= 00:06:09.438 08:56:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.438 08:56:46 -- accel/accel.sh@21 -- # val= 00:06:09.438 08:56:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.438 08:56:46 -- accel/accel.sh@21 -- # val= 00:06:09.438 08:56:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.438 08:56:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.439 08:56:46 -- accel/accel.sh@21 -- # val= 00:06:09.439 ************************************ 00:06:09.439 END TEST accel_dualcast 00:06:09.439 ************************************ 00:06:09.439 08:56:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.439 08:56:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.439 08:56:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.439 08:56:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.439 08:56:46 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:09.439 08:56:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.439 00:06:09.439 real 0m2.753s 00:06:09.439 user 0m2.398s 00:06:09.439 sys 0m0.152s 00:06:09.439 08:56:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.439 08:56:46 -- common/autotest_common.sh@10 -- # set +x 00:06:09.439 08:56:46 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:09.439 08:56:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:09.439 08:56:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.439 08:56:46 -- common/autotest_common.sh@10 -- # set +x 00:06:09.439 ************************************ 00:06:09.439 START TEST accel_compare 00:06:09.439 ************************************ 00:06:09.439 08:56:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:09.439 
08:56:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.439 08:56:46 -- accel/accel.sh@17 -- # local accel_module 00:06:09.439 08:56:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:09.439 08:56:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:09.439 08:56:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.439 08:56:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.439 08:56:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.439 08:56:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.439 08:56:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.439 08:56:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.439 08:56:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.439 08:56:46 -- accel/accel.sh@42 -- # jq -r . 00:06:09.439 [2024-11-17 08:56:46.128122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.439 [2024-11-17 08:56:46.128366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56532 ] 00:06:09.439 [2024-11-17 08:56:46.260875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.439 [2024-11-17 08:56:46.309370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.817 08:56:47 -- accel/accel.sh@18 -- # out=' 00:06:10.817 SPDK Configuration: 00:06:10.817 Core mask: 0x1 00:06:10.817 00:06:10.817 Accel Perf Configuration: 00:06:10.817 Workload Type: compare 00:06:10.817 Transfer size: 4096 bytes 00:06:10.817 Vector count 1 00:06:10.817 Module: software 00:06:10.817 Queue depth: 32 00:06:10.817 Allocate depth: 32 00:06:10.817 # threads/core: 1 00:06:10.817 Run time: 1 seconds 00:06:10.817 Verify: Yes 00:06:10.817 00:06:10.817 Running for 1 seconds... 00:06:10.817 00:06:10.817 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.817 ------------------------------------------------------------------------------------ 00:06:10.817 0,0 516864/s 2019 MiB/s 0 0 00:06:10.817 ==================================================================================== 00:06:10.817 Total 516864/s 2019 MiB/s 0 0' 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:10.817 08:56:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:10.817 08:56:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.817 08:56:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.817 08:56:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.817 08:56:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.817 08:56:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.817 08:56:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.817 08:56:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.817 08:56:47 -- accel/accel.sh@42 -- # jq -r . 00:06:10.817 [2024-11-17 08:56:47.471072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
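In the second pass of each test, accel.sh hands accel_perf a JSON accel configuration through file descriptor 62 (-c /dev/fd/62) after collecting it in the accel_json_cfg array and filtering it with jq -r. A simplified sketch of that plumbing, assuming a pre-assembled JSON string named json_cfg (a hypothetical placeholder; the real contents come from build_accel_config and are not reproduced here):

  # Placeholder config; build_accel_config generates the real JSON, which may differ.
  json_cfg='{}'

  # accel.sh streams the config over /dev/fd/62; writing it to a regular file and
  # passing that path to -c is the simpler equivalent sketched here.
  printf '%s\n' "$json_cfg" > /tmp/accel_cfg.json
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp/accel_cfg.json -t 1 -w compare -y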
00:06:10.817 [2024-11-17 08:56:47.471633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56552 ] 00:06:10.817 [2024-11-17 08:56:47.600750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.817 [2024-11-17 08:56:47.647884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val= 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val= 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val=0x1 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val= 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val= 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val=compare 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val= 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val=software 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val=32 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val=32 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val=1 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val=Yes 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val= 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:10.817 08:56:47 -- accel/accel.sh@21 -- # val= 00:06:10.817 08:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # IFS=: 00:06:10.817 08:56:47 -- accel/accel.sh@20 -- # read -r var val 00:06:12.195 08:56:48 -- accel/accel.sh@21 -- # val= 00:06:12.195 08:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # IFS=: 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # read -r var val 00:06:12.195 08:56:48 -- accel/accel.sh@21 -- # val= 00:06:12.195 08:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # IFS=: 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # read -r var val 00:06:12.195 08:56:48 -- accel/accel.sh@21 -- # val= 00:06:12.195 08:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # IFS=: 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # read -r var val 00:06:12.195 08:56:48 -- accel/accel.sh@21 -- # val= 00:06:12.195 08:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # IFS=: 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # read -r var val 00:06:12.195 08:56:48 -- accel/accel.sh@21 -- # val= 00:06:12.195 08:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # IFS=: 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # read -r var val 00:06:12.195 08:56:48 -- accel/accel.sh@21 -- # val= 00:06:12.195 08:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # IFS=: 00:06:12.195 08:56:48 -- accel/accel.sh@20 -- # read -r var val 00:06:12.195 08:56:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.195 08:56:48 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:12.195 08:56:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.195 00:06:12.195 real 0m2.693s 00:06:12.195 user 0m2.360s 00:06:12.195 sys 0m0.129s 00:06:12.195 08:56:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.195 08:56:48 -- common/autotest_common.sh@10 -- # set +x 00:06:12.195 ************************************ 00:06:12.195 END TEST accel_compare 00:06:12.195 ************************************ 00:06:12.195 08:56:48 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:12.195 08:56:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.195 08:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.195 08:56:48 -- common/autotest_common.sh@10 -- # set +x 00:06:12.195 ************************************ 00:06:12.195 START TEST accel_xor 00:06:12.195 ************************************ 00:06:12.195 08:56:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:12.195 08:56:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.195 08:56:48 -- accel/accel.sh@17 -- # local accel_module 00:06:12.195 
08:56:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:12.195 08:56:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:12.195 08:56:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.195 08:56:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.195 08:56:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.195 08:56:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.195 08:56:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.195 08:56:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.195 08:56:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.195 08:56:48 -- accel/accel.sh@42 -- # jq -r . 00:06:12.195 [2024-11-17 08:56:48.874171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.195 [2024-11-17 08:56:48.874260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56581 ] 00:06:12.195 [2024-11-17 08:56:49.013360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.195 [2024-11-17 08:56:49.060811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.581 08:56:50 -- accel/accel.sh@18 -- # out=' 00:06:13.581 SPDK Configuration: 00:06:13.581 Core mask: 0x1 00:06:13.581 00:06:13.581 Accel Perf Configuration: 00:06:13.581 Workload Type: xor 00:06:13.581 Source buffers: 2 00:06:13.581 Transfer size: 4096 bytes 00:06:13.581 Vector count 1 00:06:13.581 Module: software 00:06:13.581 Queue depth: 32 00:06:13.581 Allocate depth: 32 00:06:13.581 # threads/core: 1 00:06:13.581 Run time: 1 seconds 00:06:13.581 Verify: Yes 00:06:13.581 00:06:13.581 Running for 1 seconds... 00:06:13.581 00:06:13.581 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.581 ------------------------------------------------------------------------------------ 00:06:13.581 0,0 283264/s 1106 MiB/s 0 0 00:06:13.581 ==================================================================================== 00:06:13.581 Total 283264/s 1106 MiB/s 0 0' 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:13.581 08:56:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:13.581 08:56:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.581 08:56:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.581 08:56:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.581 08:56:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.581 08:56:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.581 08:56:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.581 08:56:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.581 08:56:50 -- accel/accel.sh@42 -- # jq -r . 00:06:13.581 [2024-11-17 08:56:50.221414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
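Each test block in this log is launched through run_test from autotest_common.sh, which is what prints the START TEST/END TEST banners and the real/user/sys timing seen between tests. The following is only a rough stand-in that reproduces the behaviour visible in this log, not the actual SPDK helper:

  # Illustrative wrapper: banner, timed command, banner - like the run_test output above.
  run_test_sketch() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }

  # Example mirroring the invocation traced above:
  # run_test_sketch accel_xor accel_test -t 1 -w xor -y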
00:06:13.581 [2024-11-17 08:56:50.221498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56600 ] 00:06:13.581 [2024-11-17 08:56:50.353536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.581 [2024-11-17 08:56:50.400090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val= 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val= 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val=0x1 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val= 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val= 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val=xor 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val=2 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val= 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val=software 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val=32 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val=32 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val=1 00:06:13.581 08:56:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val=Yes 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val= 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:13.581 08:56:50 -- accel/accel.sh@21 -- # val= 00:06:13.581 08:56:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # IFS=: 00:06:13.581 08:56:50 -- accel/accel.sh@20 -- # read -r var val 00:06:14.960 08:56:51 -- accel/accel.sh@21 -- # val= 00:06:14.960 08:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.960 08:56:51 -- accel/accel.sh@21 -- # val= 00:06:14.960 08:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.960 08:56:51 -- accel/accel.sh@21 -- # val= 00:06:14.960 08:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.960 08:56:51 -- accel/accel.sh@21 -- # val= 00:06:14.960 08:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.960 08:56:51 -- accel/accel.sh@21 -- # val= 00:06:14.960 08:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.960 ************************************ 00:06:14.960 END TEST accel_xor 00:06:14.960 ************************************ 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.960 08:56:51 -- accel/accel.sh@21 -- # val= 00:06:14.960 08:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.960 08:56:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.960 08:56:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.960 08:56:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:14.960 08:56:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.960 00:06:14.960 real 0m2.699s 00:06:14.960 user 0m2.365s 00:06:14.960 sys 0m0.130s 00:06:14.960 08:56:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.960 08:56:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.960 08:56:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:14.960 08:56:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:14.960 08:56:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.960 08:56:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.960 ************************************ 00:06:14.960 START TEST accel_xor 00:06:14.960 ************************************ 00:06:14.960 
08:56:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:14.960 08:56:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.960 08:56:51 -- accel/accel.sh@17 -- # local accel_module 00:06:14.960 08:56:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:14.960 08:56:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:14.960 08:56:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.960 08:56:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.960 08:56:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.960 08:56:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.960 08:56:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.960 08:56:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.960 08:56:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.960 08:56:51 -- accel/accel.sh@42 -- # jq -r . 00:06:14.960 [2024-11-17 08:56:51.626059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.960 [2024-11-17 08:56:51.626313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56635 ] 00:06:14.960 [2024-11-17 08:56:51.760770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.960 [2024-11-17 08:56:51.807765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.339 08:56:52 -- accel/accel.sh@18 -- # out=' 00:06:16.339 SPDK Configuration: 00:06:16.339 Core mask: 0x1 00:06:16.339 00:06:16.339 Accel Perf Configuration: 00:06:16.339 Workload Type: xor 00:06:16.339 Source buffers: 3 00:06:16.339 Transfer size: 4096 bytes 00:06:16.339 Vector count 1 00:06:16.339 Module: software 00:06:16.339 Queue depth: 32 00:06:16.339 Allocate depth: 32 00:06:16.339 # threads/core: 1 00:06:16.339 Run time: 1 seconds 00:06:16.339 Verify: Yes 00:06:16.339 00:06:16.339 Running for 1 seconds... 00:06:16.339 00:06:16.339 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.339 ------------------------------------------------------------------------------------ 00:06:16.339 0,0 273472/s 1068 MiB/s 0 0 00:06:16.339 ==================================================================================== 00:06:16.339 Total 273472/s 1068 MiB/s 0 0' 00:06:16.339 08:56:52 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:52 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:16.339 08:56:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:16.339 08:56:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.339 08:56:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.339 08:56:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.339 08:56:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.339 08:56:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.339 08:56:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.339 08:56:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.339 08:56:52 -- accel/accel.sh@42 -- # jq -r . 00:06:16.339 [2024-11-17 08:56:52.975387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
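The two xor runs differ only in the -x flag: the earlier table used the default two source buffers, while this one XORs three (Source buffers: 3 above) at a slightly lower rate. A quick comparison of the two Total rows from this log:

  # Transfers/s from the 2-source and 3-source xor tables above (4096-byte vectors).
  xor_2src=283264
  xor_3src=273472
  awk -v a="$xor_2src" -v b="$xor_3src" \
      'BEGIN { printf "3-source xor runs at %.1f%% of the 2-source rate\n", 100 * b / a }'
  # Prints about 96.5%, i.e. a roughly 3.5% drop with the extra source buffer.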
00:06:16.339 [2024-11-17 08:56:52.975468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56649 ] 00:06:16.339 [2024-11-17 08:56:53.103679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.339 [2024-11-17 08:56:53.155749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val= 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val= 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val=0x1 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val= 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val= 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val=xor 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val=3 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val= 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val=software 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val=32 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val=32 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val=1 00:06:16.339 08:56:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.339 08:56:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.339 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.339 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.340 08:56:53 -- accel/accel.sh@21 -- # val=Yes 00:06:16.340 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.340 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.340 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.340 08:56:53 -- accel/accel.sh@21 -- # val= 00:06:16.340 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.340 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.340 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:16.340 08:56:53 -- accel/accel.sh@21 -- # val= 00:06:16.340 08:56:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.340 08:56:53 -- accel/accel.sh@20 -- # IFS=: 00:06:16.340 08:56:53 -- accel/accel.sh@20 -- # read -r var val 00:06:17.719 08:56:54 -- accel/accel.sh@21 -- # val= 00:06:17.719 08:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # IFS=: 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # read -r var val 00:06:17.719 08:56:54 -- accel/accel.sh@21 -- # val= 00:06:17.719 08:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # IFS=: 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # read -r var val 00:06:17.719 08:56:54 -- accel/accel.sh@21 -- # val= 00:06:17.719 08:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # IFS=: 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # read -r var val 00:06:17.719 08:56:54 -- accel/accel.sh@21 -- # val= 00:06:17.719 08:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # IFS=: 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # read -r var val 00:06:17.719 08:56:54 -- accel/accel.sh@21 -- # val= 00:06:17.719 08:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # IFS=: 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # read -r var val 00:06:17.719 08:56:54 -- accel/accel.sh@21 -- # val= 00:06:17.719 08:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # IFS=: 00:06:17.719 08:56:54 -- accel/accel.sh@20 -- # read -r var val 00:06:17.719 08:56:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.719 08:56:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:17.719 08:56:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.719 00:06:17.719 real 0m2.706s 00:06:17.719 user 0m2.373s 00:06:17.719 sys 0m0.129s 00:06:17.719 08:56:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.719 08:56:54 -- common/autotest_common.sh@10 -- # set +x 00:06:17.719 ************************************ 00:06:17.719 END TEST accel_xor 00:06:17.719 ************************************ 00:06:17.719 08:56:54 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:17.719 08:56:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:17.719 08:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.719 08:56:54 -- common/autotest_common.sh@10 -- # set +x 00:06:17.719 ************************************ 00:06:17.719 START TEST accel_dif_verify 00:06:17.719 ************************************ 
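The accel_dif_verify test that starts here runs the dif_verify workload; the configuration dump further down reports 4096-byte vectors, 512-byte blocks and 8 bytes of DIF metadata, which appear to be the workload defaults since the logged invocation passes no size flags. A minimal standalone sketch under the same assumptions (same build path, harness-supplied -c /dev/fd/62 omitted):

# 1-second dif_verify run on the software module; block and metadata
# sizes are left at the defaults shown in the configuration dump.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify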
00:06:17.719 08:56:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:17.719 08:56:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.719 08:56:54 -- accel/accel.sh@17 -- # local accel_module 00:06:17.719 08:56:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:17.720 08:56:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.720 08:56:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:17.720 08:56:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.720 08:56:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.720 08:56:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.720 08:56:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.720 08:56:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.720 08:56:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.720 08:56:54 -- accel/accel.sh@42 -- # jq -r . 00:06:17.720 [2024-11-17 08:56:54.377039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.720 [2024-11-17 08:56:54.377794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56683 ] 00:06:17.720 [2024-11-17 08:56:54.512565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.720 [2024-11-17 08:56:54.559184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.100 08:56:55 -- accel/accel.sh@18 -- # out=' 00:06:19.100 SPDK Configuration: 00:06:19.100 Core mask: 0x1 00:06:19.100 00:06:19.100 Accel Perf Configuration: 00:06:19.100 Workload Type: dif_verify 00:06:19.100 Vector size: 4096 bytes 00:06:19.100 Transfer size: 4096 bytes 00:06:19.100 Block size: 512 bytes 00:06:19.100 Metadata size: 8 bytes 00:06:19.100 Vector count 1 00:06:19.100 Module: software 00:06:19.100 Queue depth: 32 00:06:19.100 Allocate depth: 32 00:06:19.100 # threads/core: 1 00:06:19.100 Run time: 1 seconds 00:06:19.100 Verify: No 00:06:19.100 00:06:19.100 Running for 1 seconds... 00:06:19.100 00:06:19.100 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.100 ------------------------------------------------------------------------------------ 00:06:19.100 0,0 117600/s 466 MiB/s 0 0 00:06:19.100 ==================================================================================== 00:06:19.100 Total 117600/s 459 MiB/s 0 0' 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:19.100 08:56:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.100 08:56:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:19.100 08:56:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.100 08:56:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.100 08:56:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.100 08:56:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.100 08:56:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.100 08:56:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.100 08:56:55 -- accel/accel.sh@42 -- # jq -r . 00:06:19.100 [2024-11-17 08:56:55.726525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:19.100 [2024-11-17 08:56:55.727124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56703 ] 00:06:19.100 [2024-11-17 08:56:55.863315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.100 [2024-11-17 08:56:55.909983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val= 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val= 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val=0x1 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val= 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val= 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val=dif_verify 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val= 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val=software 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 
-- # val=32 00:06:19.100 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.100 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.100 08:56:55 -- accel/accel.sh@21 -- # val=32 00:06:19.101 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.101 08:56:55 -- accel/accel.sh@21 -- # val=1 00:06:19.101 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.101 08:56:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.101 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.101 08:56:55 -- accel/accel.sh@21 -- # val=No 00:06:19.101 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.101 08:56:55 -- accel/accel.sh@21 -- # val= 00:06:19.101 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:19.101 08:56:55 -- accel/accel.sh@21 -- # val= 00:06:19.101 08:56:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # IFS=: 00:06:19.101 08:56:55 -- accel/accel.sh@20 -- # read -r var val 00:06:20.480 08:56:57 -- accel/accel.sh@21 -- # val= 00:06:20.480 08:56:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # IFS=: 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # read -r var val 00:06:20.480 08:56:57 -- accel/accel.sh@21 -- # val= 00:06:20.480 08:56:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # IFS=: 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # read -r var val 00:06:20.480 08:56:57 -- accel/accel.sh@21 -- # val= 00:06:20.480 08:56:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # IFS=: 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # read -r var val 00:06:20.480 08:56:57 -- accel/accel.sh@21 -- # val= 00:06:20.480 08:56:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # IFS=: 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # read -r var val 00:06:20.480 08:56:57 -- accel/accel.sh@21 -- # val= 00:06:20.480 08:56:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # IFS=: 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # read -r var val 00:06:20.480 08:56:57 -- accel/accel.sh@21 -- # val= 00:06:20.480 08:56:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # IFS=: 00:06:20.480 08:56:57 -- accel/accel.sh@20 -- # read -r var val 00:06:20.481 08:56:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.481 08:56:57 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:20.481 08:56:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.481 00:06:20.481 real 0m2.713s 00:06:20.481 user 0m1.191s 00:06:20.481 sys 0m0.067s 00:06:20.481 08:56:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.481 08:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.481 ************************************ 00:06:20.481 END TEST 
accel_dif_verify 00:06:20.481 ************************************ 00:06:20.481 08:56:57 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:20.481 08:56:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:20.481 08:56:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.481 08:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.481 ************************************ 00:06:20.481 START TEST accel_dif_generate 00:06:20.481 ************************************ 00:06:20.481 08:56:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:20.481 08:56:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.481 08:56:57 -- accel/accel.sh@17 -- # local accel_module 00:06:20.481 08:56:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:20.481 08:56:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:20.481 08:56:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.481 08:56:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.481 08:56:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.481 08:56:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.481 08:56:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.481 08:56:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.481 08:56:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.481 08:56:57 -- accel/accel.sh@42 -- # jq -r . 00:06:20.481 [2024-11-17 08:56:57.136486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.481 [2024-11-17 08:56:57.136583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56732 ] 00:06:20.481 [2024-11-17 08:56:57.263996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.481 [2024-11-17 08:56:57.311226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.860 08:56:58 -- accel/accel.sh@18 -- # out=' 00:06:21.860 SPDK Configuration: 00:06:21.861 Core mask: 0x1 00:06:21.861 00:06:21.861 Accel Perf Configuration: 00:06:21.861 Workload Type: dif_generate 00:06:21.861 Vector size: 4096 bytes 00:06:21.861 Transfer size: 4096 bytes 00:06:21.861 Block size: 512 bytes 00:06:21.861 Metadata size: 8 bytes 00:06:21.861 Vector count 1 00:06:21.861 Module: software 00:06:21.861 Queue depth: 32 00:06:21.861 Allocate depth: 32 00:06:21.861 # threads/core: 1 00:06:21.861 Run time: 1 seconds 00:06:21.861 Verify: No 00:06:21.861 00:06:21.861 Running for 1 seconds... 
00:06:21.861 00:06:21.861 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.861 ------------------------------------------------------------------------------------ 00:06:21.861 0,0 141984/s 563 MiB/s 0 0 00:06:21.861 ==================================================================================== 00:06:21.861 Total 141984/s 554 MiB/s 0 0' 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:21.861 08:56:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:21.861 08:56:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.861 08:56:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.861 08:56:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.861 08:56:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.861 08:56:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.861 08:56:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.861 08:56:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.861 08:56:58 -- accel/accel.sh@42 -- # jq -r . 00:06:21.861 [2024-11-17 08:56:58.490883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.861 [2024-11-17 08:56:58.491005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56756 ] 00:06:21.861 [2024-11-17 08:56:58.624488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.861 [2024-11-17 08:56:58.677474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val= 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val= 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val=0x1 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val= 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val= 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val=dif_generate 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 
00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val= 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val=software 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val=32 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val=32 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val=1 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val=No 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val= 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.861 08:56:58 -- accel/accel.sh@21 -- # val= 00:06:21.861 08:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.861 08:56:58 -- accel/accel.sh@20 -- # read -r var val 00:06:23.248 08:56:59 -- accel/accel.sh@21 -- # val= 00:06:23.248 08:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # IFS=: 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # read -r var val 00:06:23.248 08:56:59 -- accel/accel.sh@21 -- # val= 00:06:23.248 08:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # IFS=: 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # read -r var val 00:06:23.248 08:56:59 -- accel/accel.sh@21 -- # val= 00:06:23.248 08:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.248 08:56:59 -- 
accel/accel.sh@20 -- # IFS=: 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # read -r var val 00:06:23.248 08:56:59 -- accel/accel.sh@21 -- # val= 00:06:23.248 08:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # IFS=: 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # read -r var val 00:06:23.248 08:56:59 -- accel/accel.sh@21 -- # val= 00:06:23.248 08:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # IFS=: 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # read -r var val 00:06:23.248 08:56:59 -- accel/accel.sh@21 -- # val= 00:06:23.248 08:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # IFS=: 00:06:23.248 08:56:59 -- accel/accel.sh@20 -- # read -r var val 00:06:23.248 08:56:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.248 08:56:59 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:23.248 08:56:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.248 00:06:23.248 real 0m2.711s 00:06:23.248 user 0m2.386s 00:06:23.248 sys 0m0.123s 00:06:23.248 08:56:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.248 ************************************ 00:06:23.248 END TEST accel_dif_generate 00:06:23.248 ************************************ 00:06:23.248 08:56:59 -- common/autotest_common.sh@10 -- # set +x 00:06:23.248 08:56:59 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:23.248 08:56:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:23.248 08:56:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.248 08:56:59 -- common/autotest_common.sh@10 -- # set +x 00:06:23.248 ************************************ 00:06:23.248 START TEST accel_dif_generate_copy 00:06:23.248 ************************************ 00:06:23.248 08:56:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:23.248 08:56:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.248 08:56:59 -- accel/accel.sh@17 -- # local accel_module 00:06:23.248 08:56:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:23.249 08:56:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:23.249 08:56:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.249 08:56:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.249 08:56:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.249 08:56:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.249 08:56:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.249 08:56:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.249 08:56:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.249 08:56:59 -- accel/accel.sh@42 -- # jq -r . 00:06:23.249 [2024-11-17 08:56:59.900734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:23.249 [2024-11-17 08:56:59.900832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56786 ] 00:06:23.249 [2024-11-17 08:57:00.038134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.249 [2024-11-17 08:57:00.085633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.628 08:57:01 -- accel/accel.sh@18 -- # out=' 00:06:24.628 SPDK Configuration: 00:06:24.628 Core mask: 0x1 00:06:24.628 00:06:24.628 Accel Perf Configuration: 00:06:24.628 Workload Type: dif_generate_copy 00:06:24.628 Vector size: 4096 bytes 00:06:24.628 Transfer size: 4096 bytes 00:06:24.628 Vector count 1 00:06:24.628 Module: software 00:06:24.628 Queue depth: 32 00:06:24.628 Allocate depth: 32 00:06:24.628 # threads/core: 1 00:06:24.628 Run time: 1 seconds 00:06:24.628 Verify: No 00:06:24.628 00:06:24.628 Running for 1 seconds... 00:06:24.628 00:06:24.628 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.628 ------------------------------------------------------------------------------------ 00:06:24.628 0,0 104544/s 414 MiB/s 0 0 00:06:24.628 ==================================================================================== 00:06:24.628 Total 104544/s 408 MiB/s 0 0' 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:24.628 08:57:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:24.628 08:57:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.628 08:57:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.628 08:57:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.628 08:57:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.628 08:57:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.628 08:57:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.628 08:57:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.628 08:57:01 -- accel/accel.sh@42 -- # jq -r . 00:06:24.628 [2024-11-17 08:57:01.248702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:24.628 [2024-11-17 08:57:01.248774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56800 ] 00:06:24.628 [2024-11-17 08:57:01.376685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.628 [2024-11-17 08:57:01.425215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val= 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val= 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val=0x1 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val= 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val= 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val= 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val=software 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val=32 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val=32 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 
-- # val=1 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val=No 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val= 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 08:57:01 -- accel/accel.sh@21 -- # val= 00:06:24.628 08:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 08:57:01 -- accel/accel.sh@20 -- # read -r var val 00:06:26.008 08:57:02 -- accel/accel.sh@21 -- # val= 00:06:26.008 08:57:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # IFS=: 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # read -r var val 00:06:26.008 08:57:02 -- accel/accel.sh@21 -- # val= 00:06:26.008 08:57:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # IFS=: 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # read -r var val 00:06:26.008 08:57:02 -- accel/accel.sh@21 -- # val= 00:06:26.008 08:57:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # IFS=: 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # read -r var val 00:06:26.008 08:57:02 -- accel/accel.sh@21 -- # val= 00:06:26.008 08:57:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # IFS=: 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # read -r var val 00:06:26.008 08:57:02 -- accel/accel.sh@21 -- # val= 00:06:26.008 08:57:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # IFS=: 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # read -r var val 00:06:26.008 08:57:02 -- accel/accel.sh@21 -- # val= 00:06:26.008 08:57:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # IFS=: 00:06:26.008 08:57:02 -- accel/accel.sh@20 -- # read -r var val 00:06:26.008 08:57:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.008 08:57:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:26.008 08:57:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.008 00:06:26.008 real 0m2.703s 00:06:26.008 user 0m2.368s 00:06:26.008 sys 0m0.133s 00:06:26.008 08:57:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.008 ************************************ 00:06:26.008 END TEST accel_dif_generate_copy 00:06:26.008 ************************************ 00:06:26.008 08:57:02 -- common/autotest_common.sh@10 -- # set +x 00:06:26.008 08:57:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:26.008 08:57:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.008 08:57:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:26.008 08:57:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.008 08:57:02 -- 
common/autotest_common.sh@10 -- # set +x 00:06:26.008 ************************************ 00:06:26.008 START TEST accel_comp 00:06:26.008 ************************************ 00:06:26.008 08:57:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.008 08:57:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.008 08:57:02 -- accel/accel.sh@17 -- # local accel_module 00:06:26.008 08:57:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.008 08:57:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.008 08:57:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.008 08:57:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.008 08:57:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.009 08:57:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.009 08:57:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.009 08:57:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.009 08:57:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.009 08:57:02 -- accel/accel.sh@42 -- # jq -r . 00:06:26.009 [2024-11-17 08:57:02.649156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.009 [2024-11-17 08:57:02.649249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56839 ] 00:06:26.009 [2024-11-17 08:57:02.789517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.009 [2024-11-17 08:57:02.859493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.441 08:57:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:27.441 00:06:27.441 SPDK Configuration: 00:06:27.441 Core mask: 0x1 00:06:27.441 00:06:27.441 Accel Perf Configuration: 00:06:27.441 Workload Type: compress 00:06:27.441 Transfer size: 4096 bytes 00:06:27.441 Vector count 1 00:06:27.441 Module: software 00:06:27.441 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.441 Queue depth: 32 00:06:27.441 Allocate depth: 32 00:06:27.441 # threads/core: 1 00:06:27.441 Run time: 1 seconds 00:06:27.441 Verify: No 00:06:27.441 00:06:27.441 Running for 1 seconds... 
00:06:27.441 00:06:27.441 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.441 ------------------------------------------------------------------------------------ 00:06:27.441 0,0 51392/s 214 MiB/s 0 0 00:06:27.441 ==================================================================================== 00:06:27.441 Total 51392/s 200 MiB/s 0 0' 00:06:27.441 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.441 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.441 08:57:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.441 08:57:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.441 08:57:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.441 08:57:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.441 08:57:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.441 08:57:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.441 08:57:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.441 08:57:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.442 08:57:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.442 08:57:04 -- accel/accel.sh@42 -- # jq -r . 00:06:27.442 [2024-11-17 08:57:04.054894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.442 [2024-11-17 08:57:04.055001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56854 ] 00:06:27.442 [2024-11-17 08:57:04.190145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.442 [2024-11-17 08:57:04.238920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val= 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val= 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val= 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val=0x1 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val= 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val= 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val=compress 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 
00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val= 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val=software 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val=32 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val=32 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val=1 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val=No 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val= 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.442 08:57:04 -- accel/accel.sh@21 -- # val= 00:06:27.442 08:57:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.442 08:57:04 -- accel/accel.sh@20 -- # read -r var val 00:06:28.838 08:57:05 -- accel/accel.sh@21 -- # val= 00:06:28.838 08:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # IFS=: 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # read -r var val 00:06:28.838 08:57:05 -- accel/accel.sh@21 -- # val= 00:06:28.838 08:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # IFS=: 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # read -r var val 00:06:28.838 08:57:05 -- accel/accel.sh@21 -- # val= 00:06:28.838 08:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # IFS=: 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # read -r var val 00:06:28.838 08:57:05 -- accel/accel.sh@21 -- # val= 
00:06:28.838 08:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # IFS=: 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # read -r var val 00:06:28.838 08:57:05 -- accel/accel.sh@21 -- # val= 00:06:28.838 08:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # IFS=: 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # read -r var val 00:06:28.838 08:57:05 -- accel/accel.sh@21 -- # val= 00:06:28.838 08:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # IFS=: 00:06:28.838 08:57:05 -- accel/accel.sh@20 -- # read -r var val 00:06:28.838 08:57:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.838 08:57:05 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:28.838 08:57:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.838 00:06:28.838 real 0m2.767s 00:06:28.838 user 0m2.405s 00:06:28.838 sys 0m0.157s 00:06:28.838 08:57:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.838 08:57:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.838 ************************************ 00:06:28.838 END TEST accel_comp 00:06:28.838 ************************************ 00:06:28.838 08:57:05 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.838 08:57:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:28.838 08:57:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.838 08:57:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.838 ************************************ 00:06:28.838 START TEST accel_decomp 00:06:28.838 ************************************ 00:06:28.838 08:57:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.838 08:57:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.838 08:57:05 -- accel/accel.sh@17 -- # local accel_module 00:06:28.838 08:57:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.838 08:57:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:28.838 08:57:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.838 08:57:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.838 08:57:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.838 08:57:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.838 08:57:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.838 08:57:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.838 08:57:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.838 08:57:05 -- accel/accel.sh@42 -- # jq -r . 00:06:28.838 [2024-11-17 08:57:05.468923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.838 [2024-11-17 08:57:05.469039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56883 ] 00:06:28.838 [2024-11-17 08:57:05.596410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.838 [2024-11-17 08:57:05.649323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.217 08:57:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:30.217 00:06:30.217 SPDK Configuration: 00:06:30.217 Core mask: 0x1 00:06:30.217 00:06:30.217 Accel Perf Configuration: 00:06:30.217 Workload Type: decompress 00:06:30.217 Transfer size: 4096 bytes 00:06:30.217 Vector count 1 00:06:30.217 Module: software 00:06:30.217 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.217 Queue depth: 32 00:06:30.217 Allocate depth: 32 00:06:30.217 # threads/core: 1 00:06:30.217 Run time: 1 seconds 00:06:30.217 Verify: Yes 00:06:30.217 00:06:30.217 Running for 1 seconds... 00:06:30.217 00:06:30.217 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.217 ------------------------------------------------------------------------------------ 00:06:30.217 0,0 80000/s 147 MiB/s 0 0 00:06:30.217 ==================================================================================== 00:06:30.217 Total 80000/s 312 MiB/s 0 0' 00:06:30.217 08:57:06 -- accel/accel.sh@20 -- # IFS=: 00:06:30.217 08:57:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:30.217 08:57:06 -- accel/accel.sh@20 -- # read -r var val 00:06:30.217 08:57:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:30.217 08:57:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.217 08:57:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.217 08:57:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.217 08:57:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.217 08:57:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.217 08:57:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.217 08:57:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.217 08:57:06 -- accel/accel.sh@42 -- # jq -r . 00:06:30.218 [2024-11-17 08:57:06.822790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.218 [2024-11-17 08:57:06.822876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56907 ] 00:06:30.218 [2024-11-17 08:57:06.951486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.218 [2024-11-17 08:57:07.000071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val= 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val= 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val= 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val=0x1 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val= 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val= 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val=decompress 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val= 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val=software 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val=32 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- 
accel/accel.sh@21 -- # val=32 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val=1 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val=Yes 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val= 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.218 08:57:07 -- accel/accel.sh@21 -- # val= 00:06:30.218 08:57:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.218 08:57:07 -- accel/accel.sh@20 -- # read -r var val 00:06:31.596 08:57:08 -- accel/accel.sh@21 -- # val= 00:06:31.596 08:57:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.596 08:57:08 -- accel/accel.sh@20 -- # IFS=: 00:06:31.596 08:57:08 -- accel/accel.sh@20 -- # read -r var val 00:06:31.596 08:57:08 -- accel/accel.sh@21 -- # val= 00:06:31.597 08:57:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # IFS=: 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # read -r var val 00:06:31.597 08:57:08 -- accel/accel.sh@21 -- # val= 00:06:31.597 08:57:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # IFS=: 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # read -r var val 00:06:31.597 08:57:08 -- accel/accel.sh@21 -- # val= 00:06:31.597 08:57:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # IFS=: 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # read -r var val 00:06:31.597 08:57:08 -- accel/accel.sh@21 -- # val= 00:06:31.597 08:57:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # IFS=: 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # read -r var val 00:06:31.597 08:57:08 -- accel/accel.sh@21 -- # val= 00:06:31.597 08:57:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # IFS=: 00:06:31.597 08:57:08 -- accel/accel.sh@20 -- # read -r var val 00:06:31.597 08:57:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.597 08:57:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:31.597 08:57:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.597 00:06:31.597 real 0m2.715s 00:06:31.597 user 0m2.387s 00:06:31.597 sys 0m0.125s 00:06:31.597 ************************************ 00:06:31.597 END TEST accel_decomp 00:06:31.597 ************************************ 00:06:31.597 08:57:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.597 08:57:08 -- common/autotest_common.sh@10 -- # set +x 00:06:31.597 08:57:08 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:06:31.597 08:57:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:31.597 08:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.597 08:57:08 -- common/autotest_common.sh@10 -- # set +x 00:06:31.597 ************************************ 00:06:31.597 START TEST accel_decmop_full 00:06:31.597 ************************************ 00:06:31.597 08:57:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:31.597 08:57:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.597 08:57:08 -- accel/accel.sh@17 -- # local accel_module 00:06:31.597 08:57:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:31.597 08:57:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:31.597 08:57:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.597 08:57:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.597 08:57:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.597 08:57:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.597 08:57:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.597 08:57:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.597 08:57:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.597 08:57:08 -- accel/accel.sh@42 -- # jq -r . 00:06:31.597 [2024-11-17 08:57:08.235310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.597 [2024-11-17 08:57:08.235423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56937 ] 00:06:31.597 [2024-11-17 08:57:08.372725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.597 [2024-11-17 08:57:08.420841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.975 08:57:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:32.975 00:06:32.975 SPDK Configuration: 00:06:32.975 Core mask: 0x1 00:06:32.975 00:06:32.975 Accel Perf Configuration: 00:06:32.975 Workload Type: decompress 00:06:32.975 Transfer size: 111250 bytes 00:06:32.975 Vector count 1 00:06:32.975 Module: software 00:06:32.975 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.975 Queue depth: 32 00:06:32.975 Allocate depth: 32 00:06:32.975 # threads/core: 1 00:06:32.975 Run time: 1 seconds 00:06:32.975 Verify: Yes 00:06:32.975 00:06:32.975 Running for 1 seconds... 
00:06:32.975 00:06:32.975 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.975 ------------------------------------------------------------------------------------ 00:06:32.975 0,0 5312/s 219 MiB/s 0 0 00:06:32.975 ==================================================================================== 00:06:32.975 Total 5312/s 563 MiB/s 0 0' 00:06:32.975 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.975 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.975 08:57:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:32.975 08:57:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.975 08:57:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:32.975 08:57:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.975 08:57:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.976 08:57:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.976 08:57:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.976 08:57:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.976 08:57:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.976 08:57:09 -- accel/accel.sh@42 -- # jq -r . 00:06:32.976 [2024-11-17 08:57:09.605234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.976 [2024-11-17 08:57:09.605340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56951 ] 00:06:32.976 [2024-11-17 08:57:09.738516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.976 [2024-11-17 08:57:09.787068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val= 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val= 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val= 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val=0x1 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val= 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val= 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val=decompress 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:32.976 08:57:09 -- accel/accel.sh@20 
-- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val= 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val=software 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val=32 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val=32 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val=1 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val=Yes 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val= 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.976 08:57:09 -- accel/accel.sh@21 -- # val= 00:06:32.976 08:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.976 08:57:09 -- accel/accel.sh@20 -- # read -r var val 00:06:34.354 08:57:10 -- accel/accel.sh@21 -- # val= 00:06:34.354 08:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # IFS=: 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # read -r var val 00:06:34.354 08:57:10 -- accel/accel.sh@21 -- # val= 00:06:34.354 08:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # IFS=: 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # read -r var val 00:06:34.354 08:57:10 -- accel/accel.sh@21 -- # val= 00:06:34.354 08:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # IFS=: 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # read -r var val 00:06:34.354 08:57:10 -- accel/accel.sh@21 -- # 
val= 00:06:34.354 08:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # IFS=: 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # read -r var val 00:06:34.354 08:57:10 -- accel/accel.sh@21 -- # val= 00:06:34.354 08:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # IFS=: 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # read -r var val 00:06:34.354 08:57:10 -- accel/accel.sh@21 -- # val= 00:06:34.354 08:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # IFS=: 00:06:34.354 08:57:10 -- accel/accel.sh@20 -- # read -r var val 00:06:34.354 08:57:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.354 08:57:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:34.354 08:57:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.354 00:06:34.354 real 0m2.739s 00:06:34.354 user 0m2.388s 00:06:34.354 sys 0m0.147s 00:06:34.354 08:57:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.354 ************************************ 00:06:34.354 END TEST accel_decmop_full 00:06:34.354 ************************************ 00:06:34.354 08:57:10 -- common/autotest_common.sh@10 -- # set +x 00:06:34.354 08:57:10 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:34.354 08:57:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:34.354 08:57:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.354 08:57:10 -- common/autotest_common.sh@10 -- # set +x 00:06:34.354 ************************************ 00:06:34.354 START TEST accel_decomp_mcore 00:06:34.354 ************************************ 00:06:34.354 08:57:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:34.354 08:57:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.354 08:57:11 -- accel/accel.sh@17 -- # local accel_module 00:06:34.354 08:57:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:34.354 08:57:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:34.354 08:57:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.354 08:57:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.354 08:57:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.354 08:57:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.354 08:57:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.354 08:57:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.354 08:57:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.354 08:57:11 -- accel/accel.sh@42 -- # jq -r . 00:06:34.354 [2024-11-17 08:57:11.022325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
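In the result tables accel_perf prints in this log, the Total row's bandwidth tracks transfers/s multiplied by the transfer size. A quick sanity check using the numbers from the 111250-byte run that just finished above (a sketch; only the Total row is re-derived here):

    # Total row check: 5312 transfers/s * 111250 bytes per transfer
    awk 'BEGIN { printf "%.1f MiB/s\n", 5312 * 111250 / 1048576 }'   # ~563.6 MiB/s, matching "Total 5312/s 563 MiB/s"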
00:06:34.354 [2024-11-17 08:57:11.022426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56990 ] 00:06:34.354 [2024-11-17 08:57:11.158084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.354 [2024-11-17 08:57:11.209412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.354 [2024-11-17 08:57:11.209618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.354 [2024-11-17 08:57:11.210962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.354 [2024-11-17 08:57:11.211019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.731 08:57:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:35.731 00:06:35.731 SPDK Configuration: 00:06:35.731 Core mask: 0xf 00:06:35.731 00:06:35.731 Accel Perf Configuration: 00:06:35.731 Workload Type: decompress 00:06:35.731 Transfer size: 4096 bytes 00:06:35.731 Vector count 1 00:06:35.731 Module: software 00:06:35.731 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.731 Queue depth: 32 00:06:35.731 Allocate depth: 32 00:06:35.731 # threads/core: 1 00:06:35.731 Run time: 1 seconds 00:06:35.731 Verify: Yes 00:06:35.731 00:06:35.731 Running for 1 seconds... 00:06:35.731 00:06:35.731 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.731 ------------------------------------------------------------------------------------ 00:06:35.731 0,0 64128/s 118 MiB/s 0 0 00:06:35.731 3,0 60608/s 111 MiB/s 0 0 00:06:35.731 2,0 62240/s 114 MiB/s 0 0 00:06:35.731 1,0 62080/s 114 MiB/s 0 0 00:06:35.731 ==================================================================================== 00:06:35.731 Total 249056/s 972 MiB/s 0 0' 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:35.731 08:57:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.731 08:57:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:35.731 08:57:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.731 08:57:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.731 08:57:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.731 08:57:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.731 08:57:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.731 08:57:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.731 08:57:12 -- accel/accel.sh@42 -- # jq -r . 00:06:35.731 [2024-11-17 08:57:12.413168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:35.731 [2024-11-17 08:57:12.413292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57008 ] 00:06:35.731 [2024-11-17 08:57:12.547282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.731 [2024-11-17 08:57:12.598278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.731 [2024-11-17 08:57:12.598385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.731 [2024-11-17 08:57:12.598547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.731 [2024-11-17 08:57:12.598554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val= 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val= 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val= 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val=0xf 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val= 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val= 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val=decompress 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val= 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val=software 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 
00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val=32 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val=32 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val=1 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val=Yes 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val= 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.731 08:57:12 -- accel/accel.sh@21 -- # val= 00:06:35.731 08:57:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.731 08:57:12 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- 
accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@21 -- # val= 00:06:37.107 08:57:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # IFS=: 00:06:37.107 08:57:13 -- accel/accel.sh@20 -- # read -r var val 00:06:37.107 08:57:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.107 08:57:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:37.107 08:57:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.107 00:06:37.107 real 0m2.763s 00:06:37.107 user 0m8.830s 00:06:37.107 sys 0m0.182s 00:06:37.107 08:57:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.107 08:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:37.107 ************************************ 00:06:37.107 END TEST accel_decomp_mcore 00:06:37.107 ************************************ 00:06:37.107 08:57:13 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.107 08:57:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:37.107 08:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.107 08:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:37.107 ************************************ 00:06:37.107 START TEST accel_decomp_full_mcore 00:06:37.107 ************************************ 00:06:37.107 08:57:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.107 08:57:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.107 08:57:13 -- accel/accel.sh@17 -- # local accel_module 00:06:37.107 08:57:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.107 08:57:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.107 08:57:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.107 08:57:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.107 08:57:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.107 08:57:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.107 08:57:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.107 08:57:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.107 08:57:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.107 08:57:13 -- accel/accel.sh@42 -- # jq -r . 00:06:37.107 [2024-11-17 08:57:13.824486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.107 [2024-11-17 08:57:13.824572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57046 ] 00:06:37.107 [2024-11-17 08:57:13.955002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.107 [2024-11-17 08:57:14.006023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.107 [2024-11-17 08:57:14.006189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.107 [2024-11-17 08:57:14.006298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.107 [2024-11-17 08:57:14.006301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.481 08:57:15 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:38.481 00:06:38.481 SPDK Configuration: 00:06:38.481 Core mask: 0xf 00:06:38.481 00:06:38.481 Accel Perf Configuration: 00:06:38.481 Workload Type: decompress 00:06:38.481 Transfer size: 111250 bytes 00:06:38.481 Vector count 1 00:06:38.481 Module: software 00:06:38.481 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.481 Queue depth: 32 00:06:38.481 Allocate depth: 32 00:06:38.481 # threads/core: 1 00:06:38.481 Run time: 1 seconds 00:06:38.481 Verify: Yes 00:06:38.481 00:06:38.481 Running for 1 seconds... 00:06:38.481 00:06:38.481 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.481 ------------------------------------------------------------------------------------ 00:06:38.481 0,0 4832/s 199 MiB/s 0 0 00:06:38.481 3,0 4832/s 199 MiB/s 0 0 00:06:38.481 2,0 4864/s 200 MiB/s 0 0 00:06:38.481 1,0 4864/s 200 MiB/s 0 0 00:06:38.481 ==================================================================================== 00:06:38.481 Total 19392/s 2057 MiB/s 0 0' 00:06:38.481 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.481 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.481 08:57:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:38.481 08:57:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.481 08:57:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:38.481 08:57:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.481 08:57:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.481 08:57:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.481 08:57:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.481 08:57:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.481 08:57:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.481 08:57:15 -- accel/accel.sh@42 -- # jq -r . 00:06:38.481 [2024-11-17 08:57:15.207499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
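The mcore variants above are launched with -m 0xf, which shows up as "Core mask: 0xf" in the configuration block and as reactors starting on cores 0-3, and the tables accordingly carry four Core,Thread rows. A hedged sketch of running the same workload on two cores instead, reusing the paths and flags exactly as they appear in this log (it assumes the same SPDK build and hugepage setup as this job; the -c /dev/fd/62 JSON config supplied by the test harness is omitted):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0x3   # core mask 0x3 = cores 0 and 1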
00:06:38.481 [2024-11-17 08:57:15.207618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57063 ] 00:06:38.481 [2024-11-17 08:57:15.338405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.481 [2024-11-17 08:57:15.388573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.481 [2024-11-17 08:57:15.388687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.481 [2024-11-17 08:57:15.388792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.481 [2024-11-17 08:57:15.388795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val= 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val= 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val= 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val=0xf 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val= 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val= 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val=decompress 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val= 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val=software 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 
00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val=32 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val=32 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val=1 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val=Yes 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val= 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.797 08:57:15 -- accel/accel.sh@21 -- # val= 00:06:38.797 08:57:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.797 08:57:15 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- 
accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@21 -- # val= 00:06:39.728 08:57:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.728 08:57:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.728 08:57:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.728 08:57:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:39.728 08:57:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.728 00:06:39.728 real 0m2.772s 00:06:39.728 user 0m8.928s 00:06:39.728 sys 0m0.160s 00:06:39.728 08:57:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.728 ************************************ 00:06:39.728 END TEST accel_decomp_full_mcore 00:06:39.728 08:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 ************************************ 00:06:39.728 08:57:16 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:39.728 08:57:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:39.728 08:57:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.728 08:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 ************************************ 00:06:39.728 START TEST accel_decomp_mthread 00:06:39.728 ************************************ 00:06:39.728 08:57:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:39.728 08:57:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.728 08:57:16 -- accel/accel.sh@17 -- # local accel_module 00:06:39.728 08:57:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:39.728 08:57:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:39.728 08:57:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.728 08:57:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.728 08:57:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.728 08:57:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.728 08:57:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.728 08:57:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.728 08:57:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.728 08:57:16 -- accel/accel.sh@42 -- # jq -r . 00:06:39.728 [2024-11-17 08:57:16.645454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.728 [2024-11-17 08:57:16.645586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57102 ] 00:06:39.987 [2024-11-17 08:57:16.775431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.987 [2024-11-17 08:57:16.824858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.365 08:57:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:41.365 00:06:41.365 SPDK Configuration: 00:06:41.365 Core mask: 0x1 00:06:41.365 00:06:41.365 Accel Perf Configuration: 00:06:41.365 Workload Type: decompress 00:06:41.365 Transfer size: 4096 bytes 00:06:41.365 Vector count 1 00:06:41.365 Module: software 00:06:41.365 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:41.365 Queue depth: 32 00:06:41.365 Allocate depth: 32 00:06:41.365 # threads/core: 2 00:06:41.365 Run time: 1 seconds 00:06:41.365 Verify: Yes 00:06:41.365 00:06:41.365 Running for 1 seconds... 00:06:41.365 00:06:41.365 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.365 ------------------------------------------------------------------------------------ 00:06:41.365 0,1 39744/s 73 MiB/s 0 0 00:06:41.365 0,0 39616/s 73 MiB/s 0 0 00:06:41.365 ==================================================================================== 00:06:41.365 Total 79360/s 310 MiB/s 0 0' 00:06:41.365 08:57:17 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:17 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:41.365 08:57:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.365 08:57:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:41.365 08:57:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.365 08:57:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.365 08:57:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.365 08:57:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.365 08:57:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.365 08:57:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.365 08:57:17 -- accel/accel.sh@42 -- # jq -r . 00:06:41.365 [2024-11-17 08:57:18.010255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:41.365 [2024-11-17 08:57:18.010357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57122 ] 00:06:41.365 [2024-11-17 08:57:18.145249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.365 [2024-11-17 08:57:18.193496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val= 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val= 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val= 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val=0x1 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val= 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val= 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val=decompress 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val= 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val=software 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val=32 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- 
accel/accel.sh@21 -- # val=32 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val=2 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val=Yes 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val= 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:41.365 08:57:18 -- accel/accel.sh@21 -- # val= 00:06:41.365 08:57:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # IFS=: 00:06:41.365 08:57:18 -- accel/accel.sh@20 -- # read -r var val 00:06:42.743 08:57:19 -- accel/accel.sh@21 -- # val= 00:06:42.743 08:57:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.743 08:57:19 -- accel/accel.sh@21 -- # val= 00:06:42.743 08:57:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.743 08:57:19 -- accel/accel.sh@21 -- # val= 00:06:42.743 08:57:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.743 08:57:19 -- accel/accel.sh@21 -- # val= 00:06:42.743 08:57:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.743 08:57:19 -- accel/accel.sh@21 -- # val= 00:06:42.743 08:57:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.743 08:57:19 -- accel/accel.sh@21 -- # val= 00:06:42.743 08:57:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.743 08:57:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.743 08:57:19 -- accel/accel.sh@21 -- # val= 00:06:42.744 08:57:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.744 08:57:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.744 08:57:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.744 08:57:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.744 08:57:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:42.744 08:57:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.744 00:06:42.744 real 0m2.727s 00:06:42.744 user 0m2.402s 00:06:42.744 sys 0m0.122s 00:06:42.744 08:57:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.744 08:57:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.744 ************************************ 00:06:42.744 END 
TEST accel_decomp_mthread 00:06:42.744 ************************************ 00:06:42.744 08:57:19 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.744 08:57:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:42.744 08:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.744 08:57:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.744 ************************************ 00:06:42.744 START TEST accel_deomp_full_mthread 00:06:42.744 ************************************ 00:06:42.744 08:57:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.744 08:57:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.744 08:57:19 -- accel/accel.sh@17 -- # local accel_module 00:06:42.744 08:57:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.744 08:57:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.744 08:57:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.744 08:57:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.744 08:57:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.744 08:57:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.744 08:57:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.744 08:57:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.744 08:57:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.744 08:57:19 -- accel/accel.sh@42 -- # jq -r . 00:06:42.744 [2024-11-17 08:57:19.422782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.744 [2024-11-17 08:57:19.423318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57152 ] 00:06:42.744 [2024-11-17 08:57:19.561076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.744 [2024-11-17 08:57:19.609019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.122 08:57:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:44.122 00:06:44.122 SPDK Configuration: 00:06:44.122 Core mask: 0x1 00:06:44.122 00:06:44.122 Accel Perf Configuration: 00:06:44.122 Workload Type: decompress 00:06:44.122 Transfer size: 111250 bytes 00:06:44.122 Vector count 1 00:06:44.122 Module: software 00:06:44.122 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.122 Queue depth: 32 00:06:44.122 Allocate depth: 32 00:06:44.122 # threads/core: 2 00:06:44.122 Run time: 1 seconds 00:06:44.122 Verify: Yes 00:06:44.122 00:06:44.122 Running for 1 seconds... 
00:06:44.122 00:06:44.122 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.122 ------------------------------------------------------------------------------------ 00:06:44.122 0,1 2688/s 111 MiB/s 0 0 00:06:44.122 0,0 2688/s 111 MiB/s 0 0 00:06:44.122 ==================================================================================== 00:06:44.122 Total 5376/s 570 MiB/s 0 0' 00:06:44.122 08:57:20 -- accel/accel.sh@20 -- # IFS=: 00:06:44.122 08:57:20 -- accel/accel.sh@20 -- # read -r var val 00:06:44.122 08:57:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:44.122 08:57:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.122 08:57:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:44.122 08:57:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.122 08:57:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.122 08:57:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.122 08:57:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.122 08:57:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.122 08:57:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.122 08:57:20 -- accel/accel.sh@42 -- # jq -r . 00:06:44.122 [2024-11-17 08:57:20.816471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.122 [2024-11-17 08:57:20.816576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57170 ] 00:06:44.122 [2024-11-17 08:57:20.948311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.122 [2024-11-17 08:57:20.996673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.122 08:57:21 -- accel/accel.sh@21 -- # val= 00:06:44.122 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.122 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.122 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.122 08:57:21 -- accel/accel.sh@21 -- # val= 00:06:44.122 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.122 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.122 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.122 08:57:21 -- accel/accel.sh@21 -- # val= 00:06:44.122 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.122 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.122 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.122 08:57:21 -- accel/accel.sh@21 -- # val=0x1 00:06:44.122 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.122 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val= 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val= 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val=decompress 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val= 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val=software 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val=32 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val=32 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val=2 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val=Yes 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val= 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:44.123 08:57:21 -- accel/accel.sh@21 -- # val= 00:06:44.123 08:57:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # IFS=: 00:06:44.123 08:57:21 -- accel/accel.sh@20 -- # read -r var val 00:06:45.501 08:57:22 -- accel/accel.sh@21 -- # val= 00:06:45.502 08:57:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.502 08:57:22 -- accel/accel.sh@21 -- # val= 00:06:45.502 08:57:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.502 08:57:22 -- accel/accel.sh@21 -- # val= 00:06:45.502 08:57:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # 
read -r var val 00:06:45.502 08:57:22 -- accel/accel.sh@21 -- # val= 00:06:45.502 08:57:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.502 08:57:22 -- accel/accel.sh@21 -- # val= 00:06:45.502 08:57:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.502 08:57:22 -- accel/accel.sh@21 -- # val= 00:06:45.502 08:57:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.502 08:57:22 -- accel/accel.sh@21 -- # val= 00:06:45.502 08:57:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.502 08:57:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.502 08:57:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.502 08:57:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:45.502 08:57:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.502 00:06:45.502 real 0m2.775s 00:06:45.502 user 0m2.426s 00:06:45.502 sys 0m0.143s 00:06:45.502 08:57:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.502 ************************************ 00:06:45.502 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.502 END TEST accel_deomp_full_mthread 00:06:45.502 ************************************ 00:06:45.502 08:57:22 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:45.502 08:57:22 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:45.502 08:57:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:45.502 08:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.502 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.502 08:57:22 -- accel/accel.sh@129 -- # build_accel_config 00:06:45.502 08:57:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.502 08:57:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.502 08:57:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.502 08:57:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.502 08:57:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.502 08:57:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.502 08:57:22 -- accel/accel.sh@42 -- # jq -r . 00:06:45.502 ************************************ 00:06:45.502 START TEST accel_dif_functional_tests 00:06:45.502 ************************************ 00:06:45.502 08:57:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:45.502 [2024-11-17 08:57:22.274138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:45.502 [2024-11-17 08:57:22.274237] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57206 ] 00:06:45.502 [2024-11-17 08:57:22.411020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.761 [2024-11-17 08:57:22.466916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.761 [2024-11-17 08:57:22.467049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.761 [2024-11-17 08:57:22.467051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.761 00:06:45.761 00:06:45.761 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.761 http://cunit.sourceforge.net/ 00:06:45.761 00:06:45.761 00:06:45.761 Suite: accel_dif 00:06:45.761 Test: verify: DIF generated, GUARD check ...passed 00:06:45.761 Test: verify: DIF generated, APPTAG check ...passed 00:06:45.761 Test: verify: DIF generated, REFTAG check ...passed 00:06:45.761 Test: verify: DIF not generated, GUARD check ...passed 00:06:45.761 Test: verify: DIF not generated, APPTAG check ...[2024-11-17 08:57:22.518083] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:45.761 [2024-11-17 08:57:22.518173] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:45.761 [2024-11-17 08:57:22.518214] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:45.761 passed 00:06:45.761 Test: verify: DIF not generated, REFTAG check ...passed 00:06:45.761 Test: verify: APPTAG correct, APPTAG check ...[2024-11-17 08:57:22.518243] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:45.761 [2024-11-17 08:57:22.518266] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:45.761 [2024-11-17 08:57:22.518291] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:45.761 passed 00:06:45.761 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-17 08:57:22.518428] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:45.761 passed 00:06:45.761 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:45.761 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:45.761 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:45.761 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-17 08:57:22.518823] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:45.761 passed 00:06:45.761 Test: generate copy: DIF generated, GUARD check ...passed 00:06:45.761 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:45.761 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:45.761 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:45.761 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:45.761 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:45.761 Test: generate copy: iovecs-len validate ...passed 00:06:45.761 Test: generate copy: buffer alignment validate ...[2024-11-17 08:57:22.519315] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:45.761 passed 00:06:45.761 00:06:45.761 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.761 suites 1 1 n/a 0 0 00:06:45.761 tests 20 20 20 0 0 00:06:45.761 asserts 204 204 204 0 n/a 00:06:45.761 00:06:45.761 Elapsed time = 0.003 seconds 00:06:45.761 00:06:45.761 real 0m0.454s 00:06:45.761 user 0m0.512s 00:06:45.761 sys 0m0.111s 00:06:45.761 08:57:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.761 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.761 ************************************ 00:06:45.761 END TEST accel_dif_functional_tests 00:06:45.761 ************************************ 00:06:46.021 ************************************ 00:06:46.021 END TEST accel 00:06:46.021 ************************************ 00:06:46.021 00:06:46.021 real 0m58.714s 00:06:46.021 user 1m4.063s 00:06:46.021 sys 0m4.141s 00:06:46.021 08:57:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.021 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:46.021 08:57:22 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:46.021 08:57:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:46.021 08:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.021 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:46.021 ************************************ 00:06:46.021 START TEST accel_rpc 00:06:46.021 ************************************ 00:06:46.021 08:57:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:46.021 * Looking for test storage... 00:06:46.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:46.021 08:57:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:46.021 08:57:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:46.021 08:57:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:46.021 08:57:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:46.021 08:57:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:46.021 08:57:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:46.021 08:57:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:46.021 08:57:22 -- scripts/common.sh@335 -- # IFS=.-: 00:06:46.021 08:57:22 -- scripts/common.sh@335 -- # read -ra ver1 00:06:46.021 08:57:22 -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.021 08:57:22 -- scripts/common.sh@336 -- # read -ra ver2 00:06:46.021 08:57:22 -- scripts/common.sh@337 -- # local 'op=<' 00:06:46.021 08:57:22 -- scripts/common.sh@339 -- # ver1_l=2 00:06:46.021 08:57:22 -- scripts/common.sh@340 -- # ver2_l=1 00:06:46.021 08:57:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:46.021 08:57:22 -- scripts/common.sh@343 -- # case "$op" in 00:06:46.021 08:57:22 -- scripts/common.sh@344 -- # : 1 00:06:46.021 08:57:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:46.021 08:57:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.021 08:57:22 -- scripts/common.sh@364 -- # decimal 1 00:06:46.021 08:57:22 -- scripts/common.sh@352 -- # local d=1 00:06:46.021 08:57:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.021 08:57:22 -- scripts/common.sh@354 -- # echo 1 00:06:46.021 08:57:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:46.021 08:57:22 -- scripts/common.sh@365 -- # decimal 2 00:06:46.021 08:57:22 -- scripts/common.sh@352 -- # local d=2 00:06:46.021 08:57:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.021 08:57:22 -- scripts/common.sh@354 -- # echo 2 00:06:46.021 08:57:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:46.021 08:57:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:46.021 08:57:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:46.280 08:57:22 -- scripts/common.sh@367 -- # return 0 00:06:46.280 08:57:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.280 08:57:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:46.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.280 --rc genhtml_branch_coverage=1 00:06:46.280 --rc genhtml_function_coverage=1 00:06:46.280 --rc genhtml_legend=1 00:06:46.280 --rc geninfo_all_blocks=1 00:06:46.280 --rc geninfo_unexecuted_blocks=1 00:06:46.280 00:06:46.280 ' 00:06:46.280 08:57:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:46.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.280 --rc genhtml_branch_coverage=1 00:06:46.280 --rc genhtml_function_coverage=1 00:06:46.280 --rc genhtml_legend=1 00:06:46.280 --rc geninfo_all_blocks=1 00:06:46.280 --rc geninfo_unexecuted_blocks=1 00:06:46.280 00:06:46.280 ' 00:06:46.280 08:57:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:46.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.280 --rc genhtml_branch_coverage=1 00:06:46.280 --rc genhtml_function_coverage=1 00:06:46.280 --rc genhtml_legend=1 00:06:46.280 --rc geninfo_all_blocks=1 00:06:46.280 --rc geninfo_unexecuted_blocks=1 00:06:46.280 00:06:46.280 ' 00:06:46.280 08:57:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:46.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.280 --rc genhtml_branch_coverage=1 00:06:46.280 --rc genhtml_function_coverage=1 00:06:46.280 --rc genhtml_legend=1 00:06:46.280 --rc geninfo_all_blocks=1 00:06:46.280 --rc geninfo_unexecuted_blocks=1 00:06:46.280 00:06:46.280 ' 00:06:46.280 08:57:22 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:46.280 08:57:22 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57283 00:06:46.280 08:57:22 -- accel/accel_rpc.sh@15 -- # waitforlisten 57283 00:06:46.280 08:57:22 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:46.280 08:57:22 -- common/autotest_common.sh@829 -- # '[' -z 57283 ']' 00:06:46.280 08:57:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.280 08:57:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.280 08:57:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
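The chunk that follows drives the assign-opcode flow over JSON-RPC against the target that was just started. Condensed into a standalone sketch, using the same binaries and RPC names recorded in this log; the waitforlisten polling on /var/tmp/spdk.sock is reduced to a plain sleep here:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &   # start paused so the opcode assignment lands before framework init
  tgt=$!
  sleep 2                                       # stand-in for waitforlisten on /var/tmp/spdk.sock

  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
  "$SPDK/scripts/rpc.py" framework_start_init
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expected to print: software

  kill "$tgt"; wait "$tgt"
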
00:06:46.280 08:57:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.280 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:46.280 [2024-11-17 08:57:23.011905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.280 [2024-11-17 08:57:23.012015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57283 ] 00:06:46.280 [2024-11-17 08:57:23.149637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.280 [2024-11-17 08:57:23.200016] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.280 [2024-11-17 08:57:23.200172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.539 08:57:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.539 08:57:23 -- common/autotest_common.sh@862 -- # return 0 00:06:46.539 08:57:23 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:46.539 08:57:23 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:46.539 08:57:23 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:46.539 08:57:23 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:46.540 08:57:23 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:46.540 08:57:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:46.540 08:57:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.540 08:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 ************************************ 00:06:46.540 START TEST accel_assign_opcode 00:06:46.540 ************************************ 00:06:46.540 08:57:23 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:06:46.540 08:57:23 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:46.540 08:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.540 08:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 [2024-11-17 08:57:23.260545] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:46.540 08:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.540 08:57:23 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:46.540 08:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.540 08:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 [2024-11-17 08:57:23.268544] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:46.540 08:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.540 08:57:23 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:46.540 08:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.540 08:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 08:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.540 08:57:23 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:46.540 08:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.540 08:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 08:57:23 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:46.540 08:57:23 -- accel/accel_rpc.sh@42 -- # grep software 00:06:46.540 08:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.540 software 00:06:46.540 00:06:46.540 
real 0m0.199s 00:06:46.540 user 0m0.048s 00:06:46.540 sys 0m0.013s 00:06:46.540 08:57:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.540 08:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 ************************************ 00:06:46.540 END TEST accel_assign_opcode 00:06:46.540 ************************************ 00:06:46.799 08:57:23 -- accel/accel_rpc.sh@55 -- # killprocess 57283 00:06:46.800 08:57:23 -- common/autotest_common.sh@936 -- # '[' -z 57283 ']' 00:06:46.800 08:57:23 -- common/autotest_common.sh@940 -- # kill -0 57283 00:06:46.800 08:57:23 -- common/autotest_common.sh@941 -- # uname 00:06:46.800 08:57:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.800 08:57:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57283 00:06:46.800 08:57:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.800 08:57:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.800 killing process with pid 57283 00:06:46.800 08:57:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57283' 00:06:46.800 08:57:23 -- common/autotest_common.sh@955 -- # kill 57283 00:06:46.800 08:57:23 -- common/autotest_common.sh@960 -- # wait 57283 00:06:47.059 00:06:47.059 real 0m1.035s 00:06:47.059 user 0m1.011s 00:06:47.059 sys 0m0.337s 00:06:47.059 08:57:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.059 08:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:47.059 ************************************ 00:06:47.059 END TEST accel_rpc 00:06:47.059 ************************************ 00:06:47.059 08:57:23 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.059 08:57:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.059 08:57:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.059 08:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:47.059 ************************************ 00:06:47.059 START TEST app_cmdline 00:06:47.059 ************************************ 00:06:47.059 08:57:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.059 * Looking for test storage... 
00:06:47.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.059 08:57:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:47.059 08:57:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:47.059 08:57:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:47.318 08:57:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:47.318 08:57:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:47.318 08:57:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:47.318 08:57:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:47.318 08:57:24 -- scripts/common.sh@335 -- # IFS=.-: 00:06:47.318 08:57:24 -- scripts/common.sh@335 -- # read -ra ver1 00:06:47.318 08:57:24 -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.318 08:57:24 -- scripts/common.sh@336 -- # read -ra ver2 00:06:47.318 08:57:24 -- scripts/common.sh@337 -- # local 'op=<' 00:06:47.318 08:57:24 -- scripts/common.sh@339 -- # ver1_l=2 00:06:47.318 08:57:24 -- scripts/common.sh@340 -- # ver2_l=1 00:06:47.318 08:57:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:47.318 08:57:24 -- scripts/common.sh@343 -- # case "$op" in 00:06:47.318 08:57:24 -- scripts/common.sh@344 -- # : 1 00:06:47.318 08:57:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:47.318 08:57:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.318 08:57:24 -- scripts/common.sh@364 -- # decimal 1 00:06:47.318 08:57:24 -- scripts/common.sh@352 -- # local d=1 00:06:47.318 08:57:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.318 08:57:24 -- scripts/common.sh@354 -- # echo 1 00:06:47.318 08:57:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:47.318 08:57:24 -- scripts/common.sh@365 -- # decimal 2 00:06:47.318 08:57:24 -- scripts/common.sh@352 -- # local d=2 00:06:47.318 08:57:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.318 08:57:24 -- scripts/common.sh@354 -- # echo 2 00:06:47.318 08:57:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:47.318 08:57:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:47.318 08:57:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:47.318 08:57:24 -- scripts/common.sh@367 -- # return 0 00:06:47.318 08:57:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.318 08:57:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:47.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.318 --rc genhtml_branch_coverage=1 00:06:47.318 --rc genhtml_function_coverage=1 00:06:47.318 --rc genhtml_legend=1 00:06:47.318 --rc geninfo_all_blocks=1 00:06:47.318 --rc geninfo_unexecuted_blocks=1 00:06:47.318 00:06:47.318 ' 00:06:47.318 08:57:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:47.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.318 --rc genhtml_branch_coverage=1 00:06:47.318 --rc genhtml_function_coverage=1 00:06:47.318 --rc genhtml_legend=1 00:06:47.318 --rc geninfo_all_blocks=1 00:06:47.318 --rc geninfo_unexecuted_blocks=1 00:06:47.318 00:06:47.318 ' 00:06:47.318 08:57:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:47.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.318 --rc genhtml_branch_coverage=1 00:06:47.318 --rc genhtml_function_coverage=1 00:06:47.318 --rc genhtml_legend=1 00:06:47.318 --rc geninfo_all_blocks=1 00:06:47.318 --rc geninfo_unexecuted_blocks=1 00:06:47.318 00:06:47.318 ' 00:06:47.318 08:57:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:47.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.318 --rc genhtml_branch_coverage=1 00:06:47.318 --rc genhtml_function_coverage=1 00:06:47.318 --rc genhtml_legend=1 00:06:47.318 --rc geninfo_all_blocks=1 00:06:47.318 --rc geninfo_unexecuted_blocks=1 00:06:47.318 00:06:47.318 ' 00:06:47.318 08:57:24 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:47.318 08:57:24 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57371 00:06:47.318 08:57:24 -- app/cmdline.sh@18 -- # waitforlisten 57371 00:06:47.318 08:57:24 -- common/autotest_common.sh@829 -- # '[' -z 57371 ']' 00:06:47.318 08:57:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.318 08:57:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.318 08:57:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.318 08:57:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.318 08:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:47.318 08:57:24 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:47.318 [2024-11-17 08:57:24.088675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.318 [2024-11-17 08:57:24.088781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57371 ] 00:06:47.318 [2024-11-17 08:57:24.226514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.577 [2024-11-17 08:57:24.277424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.577 [2024-11-17 08:57:24.277649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.145 08:57:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.145 08:57:25 -- common/autotest_common.sh@862 -- # return 0 00:06:48.145 08:57:25 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:48.404 { 00:06:48.404 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:06:48.404 "fields": { 00:06:48.404 "major": 24, 00:06:48.404 "minor": 1, 00:06:48.404 "patch": 1, 00:06:48.404 "suffix": "-pre", 00:06:48.404 "commit": "c13c99a5e" 00:06:48.404 } 00:06:48.404 } 00:06:48.404 08:57:25 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:48.404 08:57:25 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:48.404 08:57:25 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:48.404 08:57:25 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:48.404 08:57:25 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:48.404 08:57:25 -- app/cmdline.sh@26 -- # sort 00:06:48.404 08:57:25 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:48.404 08:57:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.404 08:57:25 -- common/autotest_common.sh@10 -- # set +x 00:06:48.404 08:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.404 08:57:25 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.404 08:57:25 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.404 08:57:25 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.404 08:57:25 -- common/autotest_common.sh@650 -- # local es=0 00:06:48.404 08:57:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.404 08:57:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.404 08:57:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.404 08:57:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.404 08:57:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.404 08:57:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.404 08:57:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.404 08:57:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.404 08:57:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:48.404 08:57:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.022 request: 00:06:49.022 { 00:06:49.022 "method": "env_dpdk_get_mem_stats", 00:06:49.022 "req_id": 1 00:06:49.022 } 00:06:49.022 Got JSON-RPC error response 00:06:49.022 response: 00:06:49.022 { 00:06:49.022 "code": -32601, 00:06:49.022 "message": "Method not found" 00:06:49.022 } 00:06:49.022 08:57:25 -- common/autotest_common.sh@653 -- # es=1 00:06:49.022 08:57:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.022 08:57:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.022 08:57:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.022 08:57:25 -- app/cmdline.sh@1 -- # killprocess 57371 00:06:49.022 08:57:25 -- common/autotest_common.sh@936 -- # '[' -z 57371 ']' 00:06:49.022 08:57:25 -- common/autotest_common.sh@940 -- # kill -0 57371 00:06:49.022 08:57:25 -- common/autotest_common.sh@941 -- # uname 00:06:49.022 08:57:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.023 08:57:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57371 00:06:49.023 08:57:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.023 08:57:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.023 killing process with pid 57371 00:06:49.023 08:57:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57371' 00:06:49.023 08:57:25 -- common/autotest_common.sh@955 -- # kill 57371 00:06:49.023 08:57:25 -- common/autotest_common.sh@960 -- # wait 57371 00:06:49.023 00:06:49.023 real 0m2.069s 00:06:49.023 user 0m2.740s 00:06:49.023 sys 0m0.331s 00:06:49.023 08:57:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.023 08:57:25 -- common/autotest_common.sh@10 -- # set +x 00:06:49.023 ************************************ 00:06:49.023 END TEST app_cmdline 00:06:49.023 ************************************ 00:06:49.288 08:57:25 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.288 08:57:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.288 08:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.288 08:57:25 -- common/autotest_common.sh@10 -- # set +x 00:06:49.288 
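The allowlist behaviour the app_cmdline test verified above reduces to three calls. A sketch with the same target flags and RPC names recorded in the log (socket wait and cleanup omitted):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &

  "$SPDK/scripts/rpc.py" spdk_get_version         # allowed: returns the version JSON shown above
  "$SPDK/scripts/rpc.py" rpc_get_methods          # allowed: lists only the two whitelisted methods
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # rejected with JSON-RPC error -32601 "Method not found"
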
************************************ 00:06:49.288 START TEST version 00:06:49.288 ************************************ 00:06:49.288 08:57:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.288 * Looking for test storage... 00:06:49.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.288 08:57:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:49.288 08:57:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:49.288 08:57:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:49.288 08:57:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:49.288 08:57:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:49.288 08:57:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:49.288 08:57:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:49.288 08:57:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:49.288 08:57:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:49.288 08:57:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.288 08:57:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:49.288 08:57:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:49.288 08:57:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:49.288 08:57:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:49.288 08:57:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:49.288 08:57:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:49.288 08:57:26 -- scripts/common.sh@344 -- # : 1 00:06:49.288 08:57:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:49.288 08:57:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.288 08:57:26 -- scripts/common.sh@364 -- # decimal 1 00:06:49.288 08:57:26 -- scripts/common.sh@352 -- # local d=1 00:06:49.288 08:57:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.288 08:57:26 -- scripts/common.sh@354 -- # echo 1 00:06:49.288 08:57:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:49.288 08:57:26 -- scripts/common.sh@365 -- # decimal 2 00:06:49.288 08:57:26 -- scripts/common.sh@352 -- # local d=2 00:06:49.288 08:57:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.288 08:57:26 -- scripts/common.sh@354 -- # echo 2 00:06:49.288 08:57:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:49.288 08:57:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:49.288 08:57:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:49.288 08:57:26 -- scripts/common.sh@367 -- # return 0 00:06:49.288 08:57:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.288 08:57:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:49.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.288 --rc genhtml_branch_coverage=1 00:06:49.288 --rc genhtml_function_coverage=1 00:06:49.288 --rc genhtml_legend=1 00:06:49.288 --rc geninfo_all_blocks=1 00:06:49.288 --rc geninfo_unexecuted_blocks=1 00:06:49.288 00:06:49.288 ' 00:06:49.288 08:57:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:49.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.288 --rc genhtml_branch_coverage=1 00:06:49.288 --rc genhtml_function_coverage=1 00:06:49.288 --rc genhtml_legend=1 00:06:49.288 --rc geninfo_all_blocks=1 00:06:49.288 --rc geninfo_unexecuted_blocks=1 00:06:49.288 00:06:49.288 ' 00:06:49.288 08:57:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:49.288 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:49.288 --rc genhtml_branch_coverage=1 00:06:49.288 --rc genhtml_function_coverage=1 00:06:49.288 --rc genhtml_legend=1 00:06:49.288 --rc geninfo_all_blocks=1 00:06:49.288 --rc geninfo_unexecuted_blocks=1 00:06:49.288 00:06:49.288 ' 00:06:49.288 08:57:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:49.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.288 --rc genhtml_branch_coverage=1 00:06:49.288 --rc genhtml_function_coverage=1 00:06:49.288 --rc genhtml_legend=1 00:06:49.288 --rc geninfo_all_blocks=1 00:06:49.288 --rc geninfo_unexecuted_blocks=1 00:06:49.288 00:06:49.288 ' 00:06:49.288 08:57:26 -- app/version.sh@17 -- # get_header_version major 00:06:49.288 08:57:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.288 08:57:26 -- app/version.sh@14 -- # cut -f2 00:06:49.288 08:57:26 -- app/version.sh@14 -- # tr -d '"' 00:06:49.288 08:57:26 -- app/version.sh@17 -- # major=24 00:06:49.288 08:57:26 -- app/version.sh@18 -- # get_header_version minor 00:06:49.288 08:57:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.288 08:57:26 -- app/version.sh@14 -- # cut -f2 00:06:49.288 08:57:26 -- app/version.sh@14 -- # tr -d '"' 00:06:49.288 08:57:26 -- app/version.sh@18 -- # minor=1 00:06:49.288 08:57:26 -- app/version.sh@19 -- # get_header_version patch 00:06:49.288 08:57:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.288 08:57:26 -- app/version.sh@14 -- # cut -f2 00:06:49.288 08:57:26 -- app/version.sh@14 -- # tr -d '"' 00:06:49.288 08:57:26 -- app/version.sh@19 -- # patch=1 00:06:49.288 08:57:26 -- app/version.sh@20 -- # get_header_version suffix 00:06:49.288 08:57:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.289 08:57:26 -- app/version.sh@14 -- # cut -f2 00:06:49.289 08:57:26 -- app/version.sh@14 -- # tr -d '"' 00:06:49.289 08:57:26 -- app/version.sh@20 -- # suffix=-pre 00:06:49.289 08:57:26 -- app/version.sh@22 -- # version=24.1 00:06:49.289 08:57:26 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.289 08:57:26 -- app/version.sh@25 -- # version=24.1.1 00:06:49.289 08:57:26 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:49.289 08:57:26 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:49.289 08:57:26 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.289 08:57:26 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:49.289 08:57:26 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:49.289 00:06:49.289 real 0m0.224s 00:06:49.289 user 0m0.164s 00:06:49.289 sys 0m0.100s 00:06:49.289 08:57:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.289 ************************************ 00:06:49.289 END TEST version 00:06:49.289 ************************************ 00:06:49.289 08:57:26 -- common/autotest_common.sh@10 -- # set +x 00:06:49.548 08:57:26 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:06:49.548 08:57:26 -- spdk/autotest.sh@191 -- # uname -s 00:06:49.548 08:57:26 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
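The version check that just finished boils down to parsing include/spdk/version.h with the grep/cut/tr pipeline shown above and comparing the result against the Python package. A condensed sketch; the get helper is only shorthand for those repeated pipelines, and the rc0 tail mirrors how the test turns the -pre suffix into a release-candidate tag:

  SPDK=/home/vagrant/spdk_repo/spdk
  hdr=$SPDK/include/spdk/version.h
  get() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }

  major=$(get MAJOR); minor=$(get MINOR); patch=$(get PATCH); suffix=$(get SUFFIX)
  version=$major.$minor
  (( patch != 0 )) && version+=".$patch"
  [[ $suffix == -pre ]] && version+=rc0
  echo "$version"   # 24.1.1rc0 here, matched against: python3 -c 'import spdk; print(spdk.__version__)'
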
00:06:49.548 08:57:26 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:49.548 08:57:26 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:06:49.548 08:57:26 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:06:49.548 08:57:26 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:49.548 08:57:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.548 08:57:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.548 08:57:26 -- common/autotest_common.sh@10 -- # set +x 00:06:49.548 ************************************ 00:06:49.548 START TEST spdk_dd 00:06:49.548 ************************************ 00:06:49.548 08:57:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:49.548 * Looking for test storage... 00:06:49.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:49.548 08:57:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:49.548 08:57:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:49.548 08:57:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:49.548 08:57:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:49.548 08:57:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:49.548 08:57:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:49.548 08:57:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:49.548 08:57:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:49.548 08:57:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:49.548 08:57:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.548 08:57:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:49.548 08:57:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:49.548 08:57:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:49.548 08:57:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:49.548 08:57:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:49.548 08:57:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:49.548 08:57:26 -- scripts/common.sh@344 -- # : 1 00:06:49.548 08:57:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:49.548 08:57:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.548 08:57:26 -- scripts/common.sh@364 -- # decimal 1 00:06:49.548 08:57:26 -- scripts/common.sh@352 -- # local d=1 00:06:49.548 08:57:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.548 08:57:26 -- scripts/common.sh@354 -- # echo 1 00:06:49.548 08:57:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:49.548 08:57:26 -- scripts/common.sh@365 -- # decimal 2 00:06:49.548 08:57:26 -- scripts/common.sh@352 -- # local d=2 00:06:49.548 08:57:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.548 08:57:26 -- scripts/common.sh@354 -- # echo 2 00:06:49.548 08:57:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:49.548 08:57:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:49.548 08:57:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:49.548 08:57:26 -- scripts/common.sh@367 -- # return 0 00:06:49.548 08:57:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.548 08:57:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:49.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.548 --rc genhtml_branch_coverage=1 00:06:49.548 --rc genhtml_function_coverage=1 00:06:49.548 --rc genhtml_legend=1 00:06:49.548 --rc geninfo_all_blocks=1 00:06:49.548 --rc geninfo_unexecuted_blocks=1 00:06:49.548 00:06:49.548 ' 00:06:49.548 08:57:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:49.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.548 --rc genhtml_branch_coverage=1 00:06:49.548 --rc genhtml_function_coverage=1 00:06:49.548 --rc genhtml_legend=1 00:06:49.548 --rc geninfo_all_blocks=1 00:06:49.548 --rc geninfo_unexecuted_blocks=1 00:06:49.548 00:06:49.548 ' 00:06:49.548 08:57:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:49.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.548 --rc genhtml_branch_coverage=1 00:06:49.548 --rc genhtml_function_coverage=1 00:06:49.548 --rc genhtml_legend=1 00:06:49.548 --rc geninfo_all_blocks=1 00:06:49.548 --rc geninfo_unexecuted_blocks=1 00:06:49.548 00:06:49.548 ' 00:06:49.548 08:57:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:49.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.548 --rc genhtml_branch_coverage=1 00:06:49.548 --rc genhtml_function_coverage=1 00:06:49.548 --rc genhtml_legend=1 00:06:49.548 --rc geninfo_all_blocks=1 00:06:49.548 --rc geninfo_unexecuted_blocks=1 00:06:49.548 00:06:49.548 ' 00:06:49.548 08:57:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.548 08:57:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.548 08:57:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.548 08:57:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.548 08:57:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.548 08:57:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.548 08:57:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.548 08:57:26 -- paths/export.sh@5 -- # export PATH 00:06:49.548 08:57:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.548 08:57:26 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:50.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:50.118 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:50.118 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:50.118 08:57:26 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:50.118 08:57:26 -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:50.118 08:57:26 -- scripts/common.sh@311 -- # local bdf bdfs 00:06:50.118 08:57:26 -- scripts/common.sh@312 -- # local nvmes 00:06:50.118 08:57:26 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:06:50.118 08:57:26 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:50.118 08:57:26 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:06:50.118 08:57:26 -- scripts/common.sh@297 -- # local bdf= 00:06:50.118 08:57:26 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:06:50.118 08:57:26 -- scripts/common.sh@232 -- # local class 00:06:50.118 08:57:26 -- scripts/common.sh@233 -- # local subclass 00:06:50.118 08:57:26 -- scripts/common.sh@234 -- # local progif 00:06:50.118 08:57:26 -- scripts/common.sh@235 -- # printf %02x 1 00:06:50.118 08:57:26 -- scripts/common.sh@235 -- # class=01 00:06:50.118 08:57:26 -- scripts/common.sh@236 -- # printf %02x 8 00:06:50.118 08:57:26 -- scripts/common.sh@236 -- # subclass=08 00:06:50.118 08:57:26 -- scripts/common.sh@237 -- # printf %02x 2 00:06:50.118 08:57:26 -- scripts/common.sh@237 -- # progif=02 00:06:50.118 08:57:26 -- scripts/common.sh@239 -- # hash lspci 00:06:50.118 08:57:26 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:06:50.118 08:57:26 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:06:50.118 08:57:26 -- scripts/common.sh@242 -- # grep -i -- -p02 00:06:50.118 08:57:26 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:50.118 08:57:26 -- scripts/common.sh@244 -- # tr -d '"' 00:06:50.118 08:57:26 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:50.118 08:57:26 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:06:50.118 08:57:26 -- scripts/common.sh@15 -- # local i 00:06:50.118 08:57:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:06:50.118 08:57:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:50.118 08:57:26 -- scripts/common.sh@24 -- # return 0 00:06:50.118 08:57:26 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:06:50.118 08:57:26 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:50.118 08:57:26 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:06:50.118 08:57:26 -- scripts/common.sh@15 -- # local i 00:06:50.118 08:57:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:06:50.118 08:57:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:50.118 08:57:26 -- scripts/common.sh@24 -- # return 0 00:06:50.118 08:57:26 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:06:50.118 08:57:26 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:50.118 08:57:26 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:06:50.118 08:57:26 -- scripts/common.sh@322 -- # uname -s 00:06:50.118 08:57:26 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:50.118 08:57:26 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:50.118 08:57:26 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:50.118 08:57:26 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:06:50.118 08:57:26 -- scripts/common.sh@322 -- # uname -s 00:06:50.118 08:57:26 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:50.118 08:57:26 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:50.118 08:57:26 -- scripts/common.sh@327 -- # (( 2 )) 00:06:50.118 08:57:26 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:50.118 08:57:26 -- dd/dd.sh@13 -- # check_liburing 00:06:50.118 08:57:26 -- dd/common.sh@139 -- # local lib so 00:06:50.118 08:57:26 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:50.118 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.118 08:57:26 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:50.118 08:57:26 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.118 08:57:26 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:50.118 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.118 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:06:50.118 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.118 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:06:50.118 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.118 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:06:50.118 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.118 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:06:50.118 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:06:50.119 
08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_power.so.24 == 
liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.119 08:57:26 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:50.119 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:50.120 08:57:26 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:50.120 08:57:26 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:50.120 * spdk_dd linked to liburing 00:06:50.120 08:57:26 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:50.120 08:57:26 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:50.120 08:57:26 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:50.120 08:57:26 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:50.120 08:57:26 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:50.120 08:57:26 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:50.120 08:57:26 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:50.120 08:57:26 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:50.120 08:57:26 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:50.120 08:57:26 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:50.120 08:57:26 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:50.120 08:57:26 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:50.120 08:57:26 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:50.120 08:57:26 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:50.120 
08:57:26 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:50.120 08:57:26 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:50.120 08:57:26 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:50.120 08:57:26 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:50.120 08:57:26 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:50.120 08:57:26 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:50.120 08:57:26 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:50.120 08:57:26 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:50.120 08:57:26 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:50.120 08:57:26 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:50.120 08:57:26 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:50.120 08:57:26 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:50.120 08:57:26 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:50.120 08:57:26 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:50.120 08:57:26 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:50.120 08:57:26 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:50.120 08:57:26 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:50.120 08:57:26 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:50.120 08:57:26 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:50.120 08:57:26 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:50.120 08:57:26 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:50.120 08:57:26 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:50.120 08:57:26 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:50.120 08:57:26 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:50.120 08:57:26 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:50.120 08:57:26 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:50.120 08:57:26 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:50.120 08:57:26 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:50.120 08:57:26 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:50.120 08:57:26 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:50.120 08:57:26 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:50.120 08:57:26 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:50.120 08:57:26 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:50.120 08:57:26 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:50.120 08:57:26 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:50.120 08:57:26 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:50.120 08:57:26 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:50.120 08:57:26 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:50.120 08:57:26 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:50.120 08:57:26 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:50.120 08:57:26 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:06:50.120 08:57:26 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:50.120 08:57:26 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:50.120 08:57:26 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:50.120 08:57:26 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:50.120 08:57:26 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:50.120 08:57:26 -- 
common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:50.120 08:57:26 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:50.120 08:57:26 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:50.120 08:57:26 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:50.120 08:57:26 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:50.120 08:57:26 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:50.120 08:57:26 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:50.120 08:57:26 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:50.120 08:57:26 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:50.120 08:57:26 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:50.120 08:57:26 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:50.120 08:57:26 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:50.120 08:57:26 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:50.120 08:57:26 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:50.120 08:57:26 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:50.120 08:57:26 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:50.120 08:57:26 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:50.120 08:57:26 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:50.120 08:57:26 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:50.120 08:57:26 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:50.120 08:57:26 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:06:50.120 08:57:26 -- dd/common.sh@149 -- # [[ y != y ]] 00:06:50.120 08:57:26 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:50.120 08:57:26 -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:50.120 08:57:26 -- dd/common.sh@156 -- # liburing_in_use=1 00:06:50.120 08:57:26 -- dd/common.sh@157 -- # return 0 00:06:50.120 08:57:26 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:50.120 08:57:26 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:50.120 08:57:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:50.120 08:57:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.120 08:57:26 -- common/autotest_common.sh@10 -- # set +x 00:06:50.120 ************************************ 00:06:50.120 START TEST spdk_dd_basic_rw 00:06:50.120 ************************************ 00:06:50.120 08:57:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:50.120 * Looking for test storage... 
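The dd/common.sh@142-157 trace above is the liburing linkage probe: each dynamic dependency of spdk_dd is read one soname per line and compared against liburing.so.*, liburing.so.2 matches, the script prints "* spdk_dd linked to liburing", confirms /usr/lib64/liburing.so.2 exists, and exports liburing_in_use=1, so the rebuild branch in dd.sh is skipped. A minimal sketch of that probe, assuming the loop is fed by ldd on the spdk_dd binary (the producer of the loop input is outside this excerpt):

  # Hedged reconstruction of the liburing probe traced above; ldd as the
  # input source is an assumption, only the read/compare loop shows in the log.
  linked=0
  while read -r lib _ so _; do
      [[ $lib == liburing.so.* ]] && linked=1
  done < <(ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
  (( linked )) && printf '* spdk_dd linked to liburing\n'

  # Flag consumed by dd.sh; in this run the system library exists, so it is 1.
  [[ -e /usr/lib64/liburing.so.2 ]] && export liburing_in_use=1

  # dd.sh@15 only falls back to a liburing rebuild when the probe failed:
  # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) && ...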
00:06:50.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:50.120 08:57:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.120 08:57:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.120 08:57:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.382 08:57:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.382 08:57:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.382 08:57:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.382 08:57:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.382 08:57:27 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.382 08:57:27 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.382 08:57:27 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.382 08:57:27 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.382 08:57:27 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.382 08:57:27 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.382 08:57:27 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.382 08:57:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.382 08:57:27 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.382 08:57:27 -- scripts/common.sh@344 -- # : 1 00:06:50.382 08:57:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.382 08:57:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.382 08:57:27 -- scripts/common.sh@364 -- # decimal 1 00:06:50.382 08:57:27 -- scripts/common.sh@352 -- # local d=1 00:06:50.382 08:57:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.382 08:57:27 -- scripts/common.sh@354 -- # echo 1 00:06:50.382 08:57:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.382 08:57:27 -- scripts/common.sh@365 -- # decimal 2 00:06:50.382 08:57:27 -- scripts/common.sh@352 -- # local d=2 00:06:50.382 08:57:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.382 08:57:27 -- scripts/common.sh@354 -- # echo 2 00:06:50.382 08:57:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.382 08:57:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.382 08:57:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.382 08:57:27 -- scripts/common.sh@367 -- # return 0 00:06:50.382 08:57:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.382 08:57:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.382 --rc genhtml_branch_coverage=1 00:06:50.382 --rc genhtml_function_coverage=1 00:06:50.382 --rc genhtml_legend=1 00:06:50.382 --rc geninfo_all_blocks=1 00:06:50.382 --rc geninfo_unexecuted_blocks=1 00:06:50.382 00:06:50.382 ' 00:06:50.382 08:57:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.382 --rc genhtml_branch_coverage=1 00:06:50.382 --rc genhtml_function_coverage=1 00:06:50.382 --rc genhtml_legend=1 00:06:50.382 --rc geninfo_all_blocks=1 00:06:50.382 --rc geninfo_unexecuted_blocks=1 00:06:50.382 00:06:50.382 ' 00:06:50.382 08:57:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.382 --rc genhtml_branch_coverage=1 00:06:50.382 --rc genhtml_function_coverage=1 00:06:50.382 --rc genhtml_legend=1 00:06:50.382 --rc geninfo_all_blocks=1 00:06:50.382 --rc geninfo_unexecuted_blocks=1 00:06:50.382 00:06:50.382 ' 00:06:50.382 08:57:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.382 --rc genhtml_branch_coverage=1 00:06:50.382 --rc genhtml_function_coverage=1 00:06:50.382 --rc genhtml_legend=1 00:06:50.382 --rc geninfo_all_blocks=1 00:06:50.382 --rc geninfo_unexecuted_blocks=1 00:06:50.382 00:06:50.382 ' 00:06:50.382 08:57:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.382 08:57:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.382 08:57:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.382 08:57:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.382 08:57:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.382 08:57:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.382 08:57:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.382 08:57:27 -- paths/export.sh@5 -- # export PATH 00:06:50.382 08:57:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.382 08:57:27 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:50.382 08:57:27 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:50.382 08:57:27 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:50.382 08:57:27 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:06:50.382 08:57:27 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:50.382 08:57:27 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:06:50.382 08:57:27 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:50.382 08:57:27 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.382 08:57:27 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.382 08:57:27 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:06:50.382 08:57:27 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:06:50.382 08:57:27 -- dd/common.sh@126 -- # mapfile -t id 00:06:50.382 08:57:27 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:06:50.383 08:57:27 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2193 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:50.383 08:57:27 -- dd/common.sh@130 -- # lbaf=04 00:06:50.384 08:57:27 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2193 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:50.384 08:57:27 -- dd/common.sh@132 -- # lbaf=4096 00:06:50.384 08:57:27 -- dd/common.sh@134 -- # echo 4096 00:06:50.384 08:57:27 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:50.384 08:57:27 -- dd/basic_rw.sh@96 -- # : 00:06:50.384 08:57:27 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:50.384 08:57:27 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:50.384 08:57:27 -- dd/basic_rw.sh@96 -- # 
gen_conf 00:06:50.384 08:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.384 08:57:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.384 08:57:27 -- dd/common.sh@31 -- # xtrace_disable 00:06:50.384 08:57:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.643 ************************************ 00:06:50.643 START TEST dd_bs_lt_native_bs 00:06:50.643 ************************************ 00:06:50.643 08:57:27 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:50.643 08:57:27 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.643 08:57:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:50.643 08:57:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.643 08:57:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.643 08:57:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.643 08:57:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.643 08:57:27 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.643 08:57:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.643 08:57:27 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.643 08:57:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.643 08:57:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:50.643 { 00:06:50.643 "subsystems": [ 00:06:50.643 { 00:06:50.643 "subsystem": "bdev", 00:06:50.643 "config": [ 00:06:50.643 { 00:06:50.643 "params": { 00:06:50.643 "trtype": "pcie", 00:06:50.643 "traddr": "0000:00:06.0", 00:06:50.643 "name": "Nvme0" 00:06:50.643 }, 00:06:50.643 "method": "bdev_nvme_attach_controller" 00:06:50.643 }, 00:06:50.643 { 00:06:50.643 "method": "bdev_wait_for_examine" 00:06:50.643 } 00:06:50.643 ] 00:06:50.643 } 00:06:50.643 ] 00:06:50.643 } 00:06:50.643 [2024-11-17 08:57:27.364467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
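The two large [[ ... =~ ... ]] matches above are get_native_nvme_bs at work: the full spdk_nvme_identify dump for 0000:00:06.0 is matched once to find which LBA format is current (#04) and once to read that format's data size, giving native_bs=4096; the dd_bs_lt_native_bs test starting here then deliberately passes --bs=2048 against it. A condensed sketch of that extraction (regexes kept in variables for readability; the traced dd/common.sh@124-134 inlines them and keeps the identify output in an array):

  # Sketch of get_native_nvme_bs as traced above.
  id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r 'trtype:pcie traddr:0000:00:06.0')

  cur_re='Current LBA Format: *LBA Format #([0-9]+)'
  [[ $id =~ $cur_re ]] && lbaf=${BASH_REMATCH[1]}           # "04" here

  size_re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ $id =~ $size_re ]] && native_bs=${BASH_REMATCH[1]}     # 4096
  echo "$native_bs"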
00:06:50.644 [2024-11-17 08:57:27.364565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57718 ] 00:06:50.644 [2024-11-17 08:57:27.504876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.903 [2024-11-17 08:57:27.574464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.903 [2024-11-17 08:57:27.696744] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:50.903 [2024-11-17 08:57:27.696819] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.903 [2024-11-17 08:57:27.773040] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:51.162 08:57:27 -- common/autotest_common.sh@653 -- # es=234 00:06:51.162 08:57:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.162 08:57:27 -- common/autotest_common.sh@662 -- # es=106 00:06:51.163 08:57:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:51.163 08:57:27 -- common/autotest_common.sh@670 -- # es=1 00:06:51.163 08:57:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.163 00:06:51.163 real 0m0.572s 00:06:51.163 user 0m0.410s 00:06:51.163 sys 0m0.117s 00:06:51.163 08:57:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.163 08:57:27 -- common/autotest_common.sh@10 -- # set +x 00:06:51.163 ************************************ 00:06:51.163 END TEST dd_bs_lt_native_bs 00:06:51.163 ************************************ 00:06:51.163 08:57:27 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:51.163 08:57:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:51.163 08:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.163 08:57:27 -- common/autotest_common.sh@10 -- # set +x 00:06:51.163 ************************************ 00:06:51.163 START TEST dd_rw 00:06:51.163 ************************************ 00:06:51.163 08:57:27 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:06:51.163 08:57:27 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:51.163 08:57:27 -- dd/basic_rw.sh@12 -- # local count size 00:06:51.163 08:57:27 -- dd/basic_rw.sh@13 -- # local qds bss 00:06:51.163 08:57:27 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:51.163 08:57:27 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:51.163 08:57:27 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:51.163 08:57:27 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:51.163 08:57:27 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:51.163 08:57:27 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:51.163 08:57:27 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:51.163 08:57:27 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:51.163 08:57:27 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:51.163 08:57:27 -- dd/basic_rw.sh@23 -- # count=15 00:06:51.163 08:57:27 -- dd/basic_rw.sh@24 -- # count=15 00:06:51.163 08:57:27 -- dd/basic_rw.sh@25 -- # size=61440 00:06:51.163 08:57:27 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:51.163 08:57:27 -- dd/common.sh@98 -- # xtrace_disable 00:06:51.163 08:57:27 -- common/autotest_common.sh@10 -- # set +x 00:06:51.731 08:57:28 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
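The negative test above behaved as intended: spdk_dd refused --bs=2048 because 2048 is smaller than the 4096-byte native block, the NOT wrapper folded the resulting exit status (234 -> 106 -> 1) into a pass, and dd_rw then laid out the sweep it runs next. The block-size/queue-depth matrix from the basic_rw.sh@15-25 trace, with the per-size counts that appear later in the log:

  # Sweep set up by basic_rw.sh (values taken from the trace, not invented).
  native_bs=4096
  qds=(1 64)
  bss=()
  for bs in {0..2}; do
      bss+=($((native_bs << bs)))    # 4096, 8192, 16384
  done
  # Each pass moves `count` blocks; from the trace: count=15 at 4 KiB
  # (size=61440 bytes) and count=7 at 8 KiB (size=57344 bytes).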
00:06:51.731 08:57:28 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:51.731 08:57:28 -- dd/common.sh@31 -- # xtrace_disable 00:06:51.731 08:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.731 [2024-11-17 08:57:28.533407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.731 [2024-11-17 08:57:28.533536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57749 ] 00:06:51.731 { 00:06:51.731 "subsystems": [ 00:06:51.731 { 00:06:51.731 "subsystem": "bdev", 00:06:51.731 "config": [ 00:06:51.731 { 00:06:51.731 "params": { 00:06:51.731 "trtype": "pcie", 00:06:51.731 "traddr": "0000:00:06.0", 00:06:51.731 "name": "Nvme0" 00:06:51.731 }, 00:06:51.731 "method": "bdev_nvme_attach_controller" 00:06:51.731 }, 00:06:51.731 { 00:06:51.731 "method": "bdev_wait_for_examine" 00:06:51.731 } 00:06:51.731 ] 00:06:51.731 } 00:06:51.731 ] 00:06:51.731 } 00:06:51.990 [2024-11-17 08:57:28.661025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.990 [2024-11-17 08:57:28.709735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.990  [2024-11-17T08:57:29.180Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:52.250 00:06:52.250 08:57:29 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:52.250 08:57:29 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:52.250 08:57:29 -- dd/common.sh@31 -- # xtrace_disable 00:06:52.250 08:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.250 [2024-11-17 08:57:29.044416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
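Every spdk_dd call in this test takes its bdev configuration over an anonymous file descriptor (--json /dev/fd/62) instead of a config file on disk; gen_conf serializes the method_bdev_nvme_attach_controller_0 array declared earlier into the JSON shown above. A simplified stand-in for that flow (gen_conf's real body is not visible in this excerpt, only its output, so the function below is an assumption that reproduces the traced JSON):

  declare -A method_bdev_nvme_attach_controller_0=(
      [name]=Nvme0 [traddr]=0000:00:06.0 [trtype]=pcie
  )

  # Simplified stand-in: one bdev subsystem that attaches the controller
  # above and then waits for bdev examination, matching the JSON in the log.
  gen_conf() {
      local -n p=method_bdev_nvme_attach_controller_0
      printf '{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"trtype": "%s", "traddr": "%s", "name": "%s"},
   "method": "bdev_nvme_attach_controller"},
  {"method": "bdev_wait_for_examine"}]}]}\n' \
          "${p[trtype]}" "${p[traddr]}" "${p[name]}"
  }

  # Used like the traced invocations: the shell hands the config over an fd.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)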
00:06:52.250 [2024-11-17 08:57:29.044515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57761 ] 00:06:52.250 { 00:06:52.250 "subsystems": [ 00:06:52.250 { 00:06:52.250 "subsystem": "bdev", 00:06:52.250 "config": [ 00:06:52.250 { 00:06:52.250 "params": { 00:06:52.250 "trtype": "pcie", 00:06:52.250 "traddr": "0000:00:06.0", 00:06:52.250 "name": "Nvme0" 00:06:52.250 }, 00:06:52.250 "method": "bdev_nvme_attach_controller" 00:06:52.250 }, 00:06:52.250 { 00:06:52.250 "method": "bdev_wait_for_examine" 00:06:52.250 } 00:06:52.250 ] 00:06:52.250 } 00:06:52.250 ] 00:06:52.250 } 00:06:52.250 [2024-11-17 08:57:29.174382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.510 [2024-11-17 08:57:29.228086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.510  [2024-11-17T08:57:29.699Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:52.769 00:06:52.769 08:57:29 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.769 08:57:29 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:52.769 08:57:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:52.769 08:57:29 -- dd/common.sh@11 -- # local nvme_ref= 00:06:52.769 08:57:29 -- dd/common.sh@12 -- # local size=61440 00:06:52.769 08:57:29 -- dd/common.sh@14 -- # local bs=1048576 00:06:52.769 08:57:29 -- dd/common.sh@15 -- # local count=1 00:06:52.769 08:57:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:52.769 08:57:29 -- dd/common.sh@18 -- # gen_conf 00:06:52.769 08:57:29 -- dd/common.sh@31 -- # xtrace_disable 00:06:52.769 08:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.769 [2024-11-17 08:57:29.569156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
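The diff -q and clear_nvme entries above close out each combination: the file read back from Nvme0n1 must be byte-identical to the file that was written, and the namespace is then overwritten with zeroes so the next pass starts from known data. A sketch of that cleanup matching the traced locals (one 1 MiB zero block comfortably covers the 61440 bytes the pass touched; the real helper may derive count from size):

  # Sketch of the verification + cleanup step traced above.
  diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
          /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1   # mismatch fails dd_rw

  clear_nvme() {
      local bdev=$1 nvme_ref=$2 size=$3
      local bs=1048576 count=1        # values from the trace: 1 MiB of zeroes
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
          --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json <(gen_conf)
  }
  clear_nvme Nvme0n1 '' 61440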
00:06:52.769 [2024-11-17 08:57:29.569259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57775 ] 00:06:52.769 { 00:06:52.769 "subsystems": [ 00:06:52.769 { 00:06:52.769 "subsystem": "bdev", 00:06:52.769 "config": [ 00:06:52.769 { 00:06:52.769 "params": { 00:06:52.769 "trtype": "pcie", 00:06:52.769 "traddr": "0000:00:06.0", 00:06:52.769 "name": "Nvme0" 00:06:52.769 }, 00:06:52.769 "method": "bdev_nvme_attach_controller" 00:06:52.769 }, 00:06:52.769 { 00:06:52.769 "method": "bdev_wait_for_examine" 00:06:52.769 } 00:06:52.769 ] 00:06:52.769 } 00:06:52.769 ] 00:06:52.769 } 00:06:53.028 [2024-11-17 08:57:29.706291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.029 [2024-11-17 08:57:29.754682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.029  [2024-11-17T08:57:30.218Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:53.288 00:06:53.288 08:57:30 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:53.288 08:57:30 -- dd/basic_rw.sh@23 -- # count=15 00:06:53.288 08:57:30 -- dd/basic_rw.sh@24 -- # count=15 00:06:53.288 08:57:30 -- dd/basic_rw.sh@25 -- # size=61440 00:06:53.288 08:57:30 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:53.288 08:57:30 -- dd/common.sh@98 -- # xtrace_disable 00:06:53.288 08:57:30 -- common/autotest_common.sh@10 -- # set +x 00:06:53.857 08:57:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:53.857 08:57:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:53.857 08:57:30 -- dd/common.sh@31 -- # xtrace_disable 00:06:53.857 08:57:30 -- common/autotest_common.sh@10 -- # set +x 00:06:53.857 [2024-11-17 08:57:30.648017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
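From here the log is the same three-step cycle repeated for each (block size, queue depth) pair: refresh dd.dump0 with test data, write it into Nvme0n1 at the chosen bs/qd, read it back into dd.dump1, compare, and scrub. One iteration, condensed from the traced commands (gen_bytes is the SPDK helper that fills dump0; its body is hidden by xtrace_disable in this log):

  # One (bs, qd) iteration of dd_rw, condensed from the trace.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  bs=4096 qd=64 count=15

  gen_bytes $((bs * count))                              # new test data in dump0
  "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
  "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" \
      --json <(gen_conf)
  diff -q "$dump0" "$dump1"                              # any difference fails
  clear_nvme Nvme0n1 '' $((bs * count))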
00:06:53.857 [2024-11-17 08:57:30.648123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57793 ] 00:06:53.857 { 00:06:53.857 "subsystems": [ 00:06:53.857 { 00:06:53.857 "subsystem": "bdev", 00:06:53.857 "config": [ 00:06:53.857 { 00:06:53.857 "params": { 00:06:53.857 "trtype": "pcie", 00:06:53.857 "traddr": "0000:00:06.0", 00:06:53.857 "name": "Nvme0" 00:06:53.857 }, 00:06:53.857 "method": "bdev_nvme_attach_controller" 00:06:53.857 }, 00:06:53.857 { 00:06:53.857 "method": "bdev_wait_for_examine" 00:06:53.857 } 00:06:53.857 ] 00:06:53.857 } 00:06:53.857 ] 00:06:53.857 } 00:06:53.857 [2024-11-17 08:57:30.778519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.116 [2024-11-17 08:57:30.829354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.117  [2024-11-17T08:57:31.306Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:54.376 00:06:54.376 08:57:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:54.376 08:57:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:54.376 08:57:31 -- dd/common.sh@31 -- # xtrace_disable 00:06:54.376 08:57:31 -- common/autotest_common.sh@10 -- # set +x 00:06:54.376 [2024-11-17 08:57:31.173025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.376 [2024-11-17 08:57:31.173126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57811 ] 00:06:54.376 { 00:06:54.376 "subsystems": [ 00:06:54.376 { 00:06:54.376 "subsystem": "bdev", 00:06:54.376 "config": [ 00:06:54.376 { 00:06:54.376 "params": { 00:06:54.376 "trtype": "pcie", 00:06:54.376 "traddr": "0000:00:06.0", 00:06:54.376 "name": "Nvme0" 00:06:54.376 }, 00:06:54.376 "method": "bdev_nvme_attach_controller" 00:06:54.376 }, 00:06:54.376 { 00:06:54.376 "method": "bdev_wait_for_examine" 00:06:54.376 } 00:06:54.376 ] 00:06:54.376 } 00:06:54.376 ] 00:06:54.376 } 00:06:54.376 [2024-11-17 08:57:31.301381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.635 [2024-11-17 08:57:31.352494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.635  [2024-11-17T08:57:31.825Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:54.895 00:06:54.895 08:57:31 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.895 08:57:31 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:54.895 08:57:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:54.895 08:57:31 -- dd/common.sh@11 -- # local nvme_ref= 00:06:54.895 08:57:31 -- dd/common.sh@12 -- # local size=61440 00:06:54.895 08:57:31 -- dd/common.sh@14 -- # local bs=1048576 00:06:54.895 08:57:31 -- dd/common.sh@15 -- # local count=1 00:06:54.895 08:57:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:54.895 08:57:31 -- dd/common.sh@18 -- # gen_conf 00:06:54.895 08:57:31 -- dd/common.sh@31 -- # xtrace_disable 00:06:54.895 08:57:31 -- common/autotest_common.sh@10 -- # set +x 00:06:54.895 [2024-11-17 
08:57:31.708518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.895 [2024-11-17 08:57:31.708640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57819 ] 00:06:54.895 { 00:06:54.895 "subsystems": [ 00:06:54.895 { 00:06:54.895 "subsystem": "bdev", 00:06:54.895 "config": [ 00:06:54.895 { 00:06:54.895 "params": { 00:06:54.895 "trtype": "pcie", 00:06:54.895 "traddr": "0000:00:06.0", 00:06:54.895 "name": "Nvme0" 00:06:54.895 }, 00:06:54.895 "method": "bdev_nvme_attach_controller" 00:06:54.895 }, 00:06:54.895 { 00:06:54.895 "method": "bdev_wait_for_examine" 00:06:54.895 } 00:06:54.895 ] 00:06:54.895 } 00:06:54.895 ] 00:06:54.895 } 00:06:55.154 [2024-11-17 08:57:31.845569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.154 [2024-11-17 08:57:31.893391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.154  [2024-11-17T08:57:32.343Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:55.413 00:06:55.413 08:57:32 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:55.413 08:57:32 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:55.413 08:57:32 -- dd/basic_rw.sh@23 -- # count=7 00:06:55.413 08:57:32 -- dd/basic_rw.sh@24 -- # count=7 00:06:55.413 08:57:32 -- dd/basic_rw.sh@25 -- # size=57344 00:06:55.413 08:57:32 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:55.413 08:57:32 -- dd/common.sh@98 -- # xtrace_disable 00:06:55.413 08:57:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.981 08:57:32 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:55.981 08:57:32 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:55.981 08:57:32 -- dd/common.sh@31 -- # xtrace_disable 00:06:55.981 08:57:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.981 { 00:06:55.981 "subsystems": [ 00:06:55.981 { 00:06:55.981 "subsystem": "bdev", 00:06:55.981 "config": [ 00:06:55.981 { 00:06:55.981 "params": { 00:06:55.981 "trtype": "pcie", 00:06:55.981 "traddr": "0000:00:06.0", 00:06:55.981 "name": "Nvme0" 00:06:55.981 }, 00:06:55.982 "method": "bdev_nvme_attach_controller" 00:06:55.982 }, 00:06:55.982 { 00:06:55.982 "method": "bdev_wait_for_examine" 00:06:55.982 } 00:06:55.982 ] 00:06:55.982 } 00:06:55.982 ] 00:06:55.982 } 00:06:55.982 [2024-11-17 08:57:32.767192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
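The sweep has now moved to 8 KiB blocks; the byte budget stays roughly constant because count shrinks as bs grows (15 x 4096 = 61440, 7 x 8192 = 57344). gen_bytes itself runs with xtrace disabled (dd/common.sh@98), so its body never appears in this log; a purely hypothetical stand-in that satisfies the contract the rest of the test relies on:

  # Hypothetical stand-in for gen_bytes -- the real implementation is hidden
  # behind xtrace_disable in this log. Contract: leave $1 bytes of test data
  # in dd.dump0 for the following write/read-back comparison.
  gen_bytes() {
      head -c "$1" /dev/urandom \
          > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  }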
00:06:55.982 [2024-11-17 08:57:32.767297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57837 ] 00:06:55.982 [2024-11-17 08:57:32.906139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.241 [2024-11-17 08:57:32.955490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.241  [2024-11-17T08:57:33.430Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:56.500 00:06:56.500 08:57:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:56.500 08:57:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:56.500 08:57:33 -- dd/common.sh@31 -- # xtrace_disable 00:06:56.500 08:57:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.500 [2024-11-17 08:57:33.314045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.500 [2024-11-17 08:57:33.314145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57855 ] 00:06:56.500 { 00:06:56.500 "subsystems": [ 00:06:56.500 { 00:06:56.500 "subsystem": "bdev", 00:06:56.500 "config": [ 00:06:56.500 { 00:06:56.500 "params": { 00:06:56.500 "trtype": "pcie", 00:06:56.500 "traddr": "0000:00:06.0", 00:06:56.500 "name": "Nvme0" 00:06:56.500 }, 00:06:56.500 "method": "bdev_nvme_attach_controller" 00:06:56.500 }, 00:06:56.500 { 00:06:56.500 "method": "bdev_wait_for_examine" 00:06:56.500 } 00:06:56.500 ] 00:06:56.500 } 00:06:56.500 ] 00:06:56.500 } 00:06:56.759 [2024-11-17 08:57:33.450092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.759 [2024-11-17 08:57:33.498049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.759  [2024-11-17T08:57:33.948Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:57.018 00:06:57.018 08:57:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.018 08:57:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:57.018 08:57:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.018 08:57:33 -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.018 08:57:33 -- dd/common.sh@12 -- # local size=57344 00:06:57.018 08:57:33 -- dd/common.sh@14 -- # local bs=1048576 00:06:57.018 08:57:33 -- dd/common.sh@15 -- # local count=1 00:06:57.018 08:57:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:57.018 08:57:33 -- dd/common.sh@18 -- # gen_conf 00:06:57.018 08:57:33 -- dd/common.sh@31 -- # xtrace_disable 00:06:57.018 08:57:33 -- common/autotest_common.sh@10 -- # set +x 00:06:57.018 [2024-11-17 08:57:33.841479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:57.018 [2024-11-17 08:57:33.841628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57863 ] 00:06:57.018 { 00:06:57.018 "subsystems": [ 00:06:57.018 { 00:06:57.018 "subsystem": "bdev", 00:06:57.018 "config": [ 00:06:57.018 { 00:06:57.018 "params": { 00:06:57.018 "trtype": "pcie", 00:06:57.018 "traddr": "0000:00:06.0", 00:06:57.018 "name": "Nvme0" 00:06:57.018 }, 00:06:57.018 "method": "bdev_nvme_attach_controller" 00:06:57.018 }, 00:06:57.018 { 00:06:57.018 "method": "bdev_wait_for_examine" 00:06:57.018 } 00:06:57.018 ] 00:06:57.018 } 00:06:57.018 ] 00:06:57.018 } 00:06:57.278 [2024-11-17 08:57:33.978678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.278 [2024-11-17 08:57:34.027504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.278  [2024-11-17T08:57:34.467Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:57.537 00:06:57.537 08:57:34 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:57.537 08:57:34 -- dd/basic_rw.sh@23 -- # count=7 00:06:57.537 08:57:34 -- dd/basic_rw.sh@24 -- # count=7 00:06:57.537 08:57:34 -- dd/basic_rw.sh@25 -- # size=57344 00:06:57.537 08:57:34 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:57.537 08:57:34 -- dd/common.sh@98 -- # xtrace_disable 00:06:57.537 08:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:58.105 08:57:34 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:58.105 08:57:34 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:58.105 08:57:34 -- dd/common.sh@31 -- # xtrace_disable 00:06:58.105 08:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:58.105 [2024-11-17 08:57:34.872398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
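Each (bs, qd) pair above follows the same cycle: write dd.dump0 to the bdev, read the same region back into dd.dump1, compare the two files with diff -q, then zero the bdev before the next pass. A minimal sketch of that cycle for the bs=8192 case, reusing the gen_conf stand-in from the earlier sketch; the SPDK_DD/DUMP0/DUMP1 variable names are introduced here for brevity, and count=7 matches the 57344-byte payload used in this run.

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    for qd in 1 64; do
      # write the generated payload out to the NVMe bdev
      "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=8192 --qd="$qd" --json <(gen_conf)
      # read the same number of blocks back into a second file
      "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=8192 --qd="$qd" --count=7 --json <(gen_conf)
      # byte-for-byte comparison; a mismatch fails the test
      diff -q "$DUMP0" "$DUMP1"
    done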
00:06:58.105 [2024-11-17 08:57:34.872502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57881 ] 00:06:58.105 { 00:06:58.105 "subsystems": [ 00:06:58.105 { 00:06:58.105 "subsystem": "bdev", 00:06:58.105 "config": [ 00:06:58.105 { 00:06:58.105 "params": { 00:06:58.105 "trtype": "pcie", 00:06:58.105 "traddr": "0000:00:06.0", 00:06:58.105 "name": "Nvme0" 00:06:58.105 }, 00:06:58.105 "method": "bdev_nvme_attach_controller" 00:06:58.105 }, 00:06:58.105 { 00:06:58.105 "method": "bdev_wait_for_examine" 00:06:58.105 } 00:06:58.105 ] 00:06:58.105 } 00:06:58.105 ] 00:06:58.105 } 00:06:58.105 [2024-11-17 08:57:35.009836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.364 [2024-11-17 08:57:35.059892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.364  [2024-11-17T08:57:35.553Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:58.623 00:06:58.623 08:57:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:58.623 08:57:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:58.623 08:57:35 -- dd/common.sh@31 -- # xtrace_disable 00:06:58.623 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:06:58.623 [2024-11-17 08:57:35.418187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.623 [2024-11-17 08:57:35.418336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57899 ] 00:06:58.623 { 00:06:58.623 "subsystems": [ 00:06:58.623 { 00:06:58.623 "subsystem": "bdev", 00:06:58.623 "config": [ 00:06:58.623 { 00:06:58.623 "params": { 00:06:58.623 "trtype": "pcie", 00:06:58.623 "traddr": "0000:00:06.0", 00:06:58.623 "name": "Nvme0" 00:06:58.623 }, 00:06:58.623 "method": "bdev_nvme_attach_controller" 00:06:58.623 }, 00:06:58.623 { 00:06:58.623 "method": "bdev_wait_for_examine" 00:06:58.623 } 00:06:58.623 ] 00:06:58.623 } 00:06:58.623 ] 00:06:58.623 } 00:06:58.882 [2024-11-17 08:57:35.557192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.882 [2024-11-17 08:57:35.605559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.882  [2024-11-17T08:57:36.072Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:59.142 00:06:59.142 08:57:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.142 08:57:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:59.142 08:57:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:59.142 08:57:35 -- dd/common.sh@11 -- # local nvme_ref= 00:06:59.142 08:57:35 -- dd/common.sh@12 -- # local size=57344 00:06:59.142 08:57:35 -- dd/common.sh@14 -- # local bs=1048576 00:06:59.142 08:57:35 -- dd/common.sh@15 -- # local count=1 00:06:59.142 08:57:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:59.142 08:57:35 -- dd/common.sh@18 -- # gen_conf 00:06:59.142 08:57:35 -- dd/common.sh@31 -- # xtrace_disable 00:06:59.142 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:06:59.142 [2024-11-17 
08:57:35.944530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.142 [2024-11-17 08:57:35.944675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57907 ] 00:06:59.142 { 00:06:59.142 "subsystems": [ 00:06:59.142 { 00:06:59.142 "subsystem": "bdev", 00:06:59.142 "config": [ 00:06:59.142 { 00:06:59.142 "params": { 00:06:59.142 "trtype": "pcie", 00:06:59.142 "traddr": "0000:00:06.0", 00:06:59.142 "name": "Nvme0" 00:06:59.142 }, 00:06:59.142 "method": "bdev_nvme_attach_controller" 00:06:59.142 }, 00:06:59.142 { 00:06:59.142 "method": "bdev_wait_for_examine" 00:06:59.142 } 00:06:59.142 ] 00:06:59.142 } 00:06:59.142 ] 00:06:59.142 } 00:06:59.401 [2024-11-17 08:57:36.075512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.402 [2024-11-17 08:57:36.124524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.402  [2024-11-17T08:57:36.590Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:59.660 00:06:59.660 08:57:36 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:59.660 08:57:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:59.660 08:57:36 -- dd/basic_rw.sh@23 -- # count=3 00:06:59.660 08:57:36 -- dd/basic_rw.sh@24 -- # count=3 00:06:59.660 08:57:36 -- dd/basic_rw.sh@25 -- # size=49152 00:06:59.660 08:57:36 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:59.660 08:57:36 -- dd/common.sh@98 -- # xtrace_disable 00:06:59.660 08:57:36 -- common/autotest_common.sh@10 -- # set +x 00:07:00.229 08:57:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:00.230 08:57:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:00.230 08:57:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:00.230 08:57:36 -- common/autotest_common.sh@10 -- # set +x 00:07:00.230 { 00:07:00.230 "subsystems": [ 00:07:00.230 { 00:07:00.230 "subsystem": "bdev", 00:07:00.230 "config": [ 00:07:00.230 { 00:07:00.230 "params": { 00:07:00.230 "trtype": "pcie", 00:07:00.230 "traddr": "0000:00:06.0", 00:07:00.230 "name": "Nvme0" 00:07:00.230 }, 00:07:00.230 "method": "bdev_nvme_attach_controller" 00:07:00.230 }, 00:07:00.230 { 00:07:00.230 "method": "bdev_wait_for_examine" 00:07:00.230 } 00:07:00.230 ] 00:07:00.230 } 00:07:00.230 ] 00:07:00.230 } 00:07:00.230 [2024-11-17 08:57:36.919084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
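The count/size pairs chosen per block size are not arbitrary: the read-back covers exactly the payload that was written, so count times bs equals the generated size (7 x 8192 = 57344 for the 8 KiB passes, 3 x 16384 = 49152 for the 16 KiB passes that begin here). A quick check, assuming nothing beyond the values visible in the log:

    # count * bs = size for both block sizes exercised in this run
    for spec in "8192 7" "16384 3"; do
      set -- $spec
      echo "bs=$1 count=$2 size=$(( $1 * $2 ))"   # prints 57344 and 49152
    done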
00:07:00.230 [2024-11-17 08:57:36.919181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57926 ] 00:07:00.230 [2024-11-17 08:57:37.058705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.230 [2024-11-17 08:57:37.107296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.489  [2024-11-17T08:57:37.419Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:00.489 00:07:00.489 08:57:37 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:00.489 08:57:37 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:00.489 08:57:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:00.489 08:57:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.748 [2024-11-17 08:57:37.463506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.748 [2024-11-17 08:57:37.463620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57944 ] 00:07:00.748 { 00:07:00.748 "subsystems": [ 00:07:00.748 { 00:07:00.748 "subsystem": "bdev", 00:07:00.748 "config": [ 00:07:00.748 { 00:07:00.748 "params": { 00:07:00.748 "trtype": "pcie", 00:07:00.748 "traddr": "0000:00:06.0", 00:07:00.748 "name": "Nvme0" 00:07:00.748 }, 00:07:00.748 "method": "bdev_nvme_attach_controller" 00:07:00.748 }, 00:07:00.748 { 00:07:00.748 "method": "bdev_wait_for_examine" 00:07:00.748 } 00:07:00.748 ] 00:07:00.748 } 00:07:00.748 ] 00:07:00.748 } 00:07:00.748 [2024-11-17 08:57:37.604236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.748 [2024-11-17 08:57:37.672128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.008  [2024-11-17T08:57:38.197Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:01.267 00:07:01.267 08:57:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.267 08:57:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:01.267 08:57:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:01.267 08:57:37 -- dd/common.sh@11 -- # local nvme_ref= 00:07:01.267 08:57:37 -- dd/common.sh@12 -- # local size=49152 00:07:01.267 08:57:37 -- dd/common.sh@14 -- # local bs=1048576 00:07:01.267 08:57:37 -- dd/common.sh@15 -- # local count=1 00:07:01.267 08:57:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:01.267 08:57:37 -- dd/common.sh@18 -- # gen_conf 00:07:01.267 08:57:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:01.267 08:57:37 -- common/autotest_common.sh@10 -- # set +x 00:07:01.267 [2024-11-17 08:57:38.010330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:01.267 [2024-11-17 08:57:38.010429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57952 ] 00:07:01.267 { 00:07:01.267 "subsystems": [ 00:07:01.267 { 00:07:01.267 "subsystem": "bdev", 00:07:01.267 "config": [ 00:07:01.267 { 00:07:01.267 "params": { 00:07:01.267 "trtype": "pcie", 00:07:01.267 "traddr": "0000:00:06.0", 00:07:01.267 "name": "Nvme0" 00:07:01.267 }, 00:07:01.267 "method": "bdev_nvme_attach_controller" 00:07:01.267 }, 00:07:01.267 { 00:07:01.267 "method": "bdev_wait_for_examine" 00:07:01.267 } 00:07:01.267 ] 00:07:01.267 } 00:07:01.267 ] 00:07:01.267 } 00:07:01.267 [2024-11-17 08:57:38.142583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.267 [2024-11-17 08:57:38.189209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.527  [2024-11-17T08:57:38.716Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:01.786 00:07:01.786 08:57:38 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:01.786 08:57:38 -- dd/basic_rw.sh@23 -- # count=3 00:07:01.786 08:57:38 -- dd/basic_rw.sh@24 -- # count=3 00:07:01.786 08:57:38 -- dd/basic_rw.sh@25 -- # size=49152 00:07:01.786 08:57:38 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:01.786 08:57:38 -- dd/common.sh@98 -- # xtrace_disable 00:07:01.786 08:57:38 -- common/autotest_common.sh@10 -- # set +x 00:07:02.045 08:57:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:02.045 08:57:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:02.045 08:57:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:02.045 08:57:38 -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 { 00:07:02.304 "subsystems": [ 00:07:02.304 { 00:07:02.304 "subsystem": "bdev", 00:07:02.304 "config": [ 00:07:02.304 { 00:07:02.304 "params": { 00:07:02.304 "trtype": "pcie", 00:07:02.304 "traddr": "0000:00:06.0", 00:07:02.304 "name": "Nvme0" 00:07:02.304 }, 00:07:02.304 "method": "bdev_nvme_attach_controller" 00:07:02.304 }, 00:07:02.304 { 00:07:02.304 "method": "bdev_wait_for_examine" 00:07:02.304 } 00:07:02.304 ] 00:07:02.304 } 00:07:02.304 ] 00:07:02.304 } 00:07:02.304 [2024-11-17 08:57:38.983345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:02.304 [2024-11-17 08:57:38.983447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57970 ] 00:07:02.304 [2024-11-17 08:57:39.123375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.304 [2024-11-17 08:57:39.171230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.563  [2024-11-17T08:57:39.493Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:02.563 00:07:02.563 08:57:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:02.563 08:57:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:02.563 08:57:39 -- dd/common.sh@31 -- # xtrace_disable 00:07:02.563 08:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.822 [2024-11-17 08:57:39.505958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.822 [2024-11-17 08:57:39.506063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57988 ] 00:07:02.822 { 00:07:02.822 "subsystems": [ 00:07:02.822 { 00:07:02.822 "subsystem": "bdev", 00:07:02.822 "config": [ 00:07:02.822 { 00:07:02.822 "params": { 00:07:02.822 "trtype": "pcie", 00:07:02.822 "traddr": "0000:00:06.0", 00:07:02.822 "name": "Nvme0" 00:07:02.822 }, 00:07:02.822 "method": "bdev_nvme_attach_controller" 00:07:02.822 }, 00:07:02.822 { 00:07:02.822 "method": "bdev_wait_for_examine" 00:07:02.822 } 00:07:02.822 ] 00:07:02.822 } 00:07:02.822 ] 00:07:02.822 } 00:07:02.822 [2024-11-17 08:57:39.643454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.822 [2024-11-17 08:57:39.699973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.081  [2024-11-17T08:57:40.011Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:03.081 00:07:03.081 08:57:40 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.341 08:57:40 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:03.341 08:57:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:03.341 08:57:40 -- dd/common.sh@11 -- # local nvme_ref= 00:07:03.341 08:57:40 -- dd/common.sh@12 -- # local size=49152 00:07:03.341 08:57:40 -- dd/common.sh@14 -- # local bs=1048576 00:07:03.341 08:57:40 -- dd/common.sh@15 -- # local count=1 00:07:03.341 08:57:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:03.341 08:57:40 -- dd/common.sh@18 -- # gen_conf 00:07:03.341 08:57:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:03.341 08:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.341 [2024-11-17 08:57:40.062552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
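Between passes the harness calls clear_nvme, which in this configuration reduces to one more spdk_dd run that writes a single 1 MiB block of zeroes over the start of the bdev (the "Copying: 1024/1024 [kB]" lines above). A minimal equivalent, again assuming the gen_conf stand-in and binary path from the earlier sketches:

    # Zero the first 1 MiB of the bdev so the next pass starts from a known
    # state, mirroring the --if=/dev/zero calls in the log.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)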
00:07:03.341 [2024-11-17 08:57:40.062711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57996 ] 00:07:03.341 { 00:07:03.341 "subsystems": [ 00:07:03.341 { 00:07:03.341 "subsystem": "bdev", 00:07:03.341 "config": [ 00:07:03.341 { 00:07:03.341 "params": { 00:07:03.341 "trtype": "pcie", 00:07:03.341 "traddr": "0000:00:06.0", 00:07:03.341 "name": "Nvme0" 00:07:03.341 }, 00:07:03.341 "method": "bdev_nvme_attach_controller" 00:07:03.341 }, 00:07:03.341 { 00:07:03.341 "method": "bdev_wait_for_examine" 00:07:03.341 } 00:07:03.341 ] 00:07:03.341 } 00:07:03.341 ] 00:07:03.341 } 00:07:03.341 [2024-11-17 08:57:40.201892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.341 [2024-11-17 08:57:40.250811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.600  [2024-11-17T08:57:40.790Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:03.860 00:07:03.860 00:07:03.860 real 0m12.626s 00:07:03.860 user 0m9.359s 00:07:03.860 sys 0m2.047s 00:07:03.860 08:57:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.860 ************************************ 00:07:03.860 END TEST dd_rw 00:07:03.860 ************************************ 00:07:03.860 08:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.860 08:57:40 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:03.860 08:57:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.860 08:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.860 08:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.860 ************************************ 00:07:03.860 START TEST dd_rw_offset 00:07:03.860 ************************************ 00:07:03.860 08:57:40 -- common/autotest_common.sh@1114 -- # basic_offset 00:07:03.860 08:57:40 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:03.860 08:57:40 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:03.860 08:57:40 -- dd/common.sh@98 -- # xtrace_disable 00:07:03.860 08:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.860 08:57:40 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:03.860 08:57:40 -- dd/basic_rw.sh@56 -- # 
data=f0f5w5p4l4x7yhdm8wv2jyh578k90h6weuybm6unxn8m6rebci2ydcqz361f1hy41xz888n5xpwnmrdzwno9sg51v1wqku3iqhzvzld8ritpu498fqmputdwra9b4jywgkf0hkea966fi3l6jj8lymwuzcgkxf62yf31s202mitau1p9vob53gnpuo21gh7cqbbpwzovqix5a3rogqb5qfelnqxxokj82wregk2lwwn7srrvf1bk3imiuwyy5mfs8uykjuwzi7iu8xnryggvhb3pgvz6bixesorf1ieny6mtp56h8xpq6a13i3xg8o7d2pi0mqtn03h5iqvmjpvgkb14pqwr46rjknlwofvsxj21jqjqu0k8md6jrbr6dmlrdnadvky23ovmzr3qwuhhz95rjc896ymwh3faeby7bdtho860k2st10euw2nk01fa5bpf11vvzqfpnwh6vr7lcv35404hnvmjao0u3cgt1ebbsxzhftz80p96mqq63ncph9fln6xrcrucjtds1mmnob0jl523ozugc22pio6yauqm6g40d2fgnz6eat4rlc7r8hrvbxr5n4ittzqxz3wbiveg8urzu5bnpp12gv2hehcsk4vdy15sl45yfvoij2eov6myk5cyk53za21r6ibzsozhyx35opwne267egbgsudw3yxnga6l8w49fgekvntb76a1bkqgnwgampkqtdyrpqocknla02ydykazdpgj4xjuqmkqixze498eyluohb78ysl46f23yo0rvivrzcsql85eu3woilivfj3aany4vxan5pf4rqtzf3rlo3td75w5anf2mj8r6atrlmtbli9q91t10ghna7kfchndp8awpm64utd8mxytd2ilrucqc31j9s7frqdcvc62v6ta51zs2szrhaybc2mkfom22romwii6uj3xugp6i1vvrotwrkqq92u0en9wdvnjcic6i3r5wemywms7aawr5wjg5lx3w8xzhx81hsxk1dmep6vibizbfk7bobencmmddvyv9e5bs6l8uqc52nnjgy3e3u0dwqh6wn69glb5gjfz0n9jpl0sr0aj8zp7mssv9wamrpr7dhy450o4son7wj7mdu07icw1k31tgrdubdg0lfgbmh53mcpbk62c0w6ge8wlch7btwl0f8wr1e0sjfpqfkq3sfnio10jkjsebf26i4a4kj3tk6ufhxf4h8btomfjb32v6b5qucj7qjln2rv4vj8ga8yy1n2os6rfptbbue3edl7ntffflr33qn1khzyaoy5qnn312nhpv1g46mfg7ksb7n1oewhooipi9wxjm0iepkwtwbfqp0iumeopo5bq2t14hxg4ttj04b5z6tnzepvnqyqwy0gcfvps91fugd9d2lvee1nkrvf8nx17owgyygnv7hre56sfiqyku997a9xfovy5x79tuu217hqaadkn0dgz36ejr3c3wg11pv63mq5crogb8qhgtx7ga963199fiiezzplny3uv3oil3iy1az1mw3uwr1s3pgg0heq6pti829scxio7ajvaskyd4vcb0m4zfig40dd64bcprnr5mhl1ccq1iehr5julr4dsy8ykjfxd2vtyeitziwuhhp21mw05gspoo8ir5qeu82xq3p371hb0vf51tmuywftsbh31k8ni02krp2umkd2n15e17jwj2s1c3jj2wvnzzajijr1k70tzn768pc9cvuudz0o2vmg34dja6dixg3gbx7u6ajghsigkbihotacspfupbyxxmikbakjtr8oo9szoi12p11yasscsuohhckb7o3qla2hld3d3w9h1h0flwcggjpsloo6yqzn43p9g9uaxonp31w41kwc05ytkbtvjornqr1xrctixz3e3wmiwiknniosedquxsft3z931t4uq5p5y25mw6icluw6dk7utj0hsrjd8637us64a13hl590axieucirqksaudg0srj62gzzbuprr2u1a5gb5p9t6v3is3mwbjbfe8nooqi0iizyje0n6h3ebrqfp4syufnggwgzpfdsp8dqnix5lk4vxfvplk31xc1e61oa1zzasvnfd1skdcquv5zyzmy1uk9p3kuu875o8yejfp9siqfmsiacr293w97lzvvqemr8t03cl1c59kf9nzykj4egvt1k839xsafdrs31a0zdun7088jwvzvz89l1ttxqjf1tl4ua91di7xuj051kioyu22wa8pnw1ab03ohot8lqlno573wr806nzsqo5tm4jr7ndonf1d12pl7nilob1l0ovjefh2csi3pxm9t03gkudwdvy33dnoflidf8hfc0jlx1nk7xouhrcmo3qzt6vjvpaw4j2b80ec3m7lnrhtr800ixesqhnfexeebjcrlcd7nm61z7s8ijna3vt6mr8k0jref9vhaju2sgvie56fm0a3adn25lgauiqjfzzuot3c50aokpuv7cg0rwugzbfpro60v537mr8mjg8fhzm86d1pk4m7axqinp8ffch7myqx6cxs4jdqodnct4jr8igz9rkmo82fh2y32tag6ufdecocwti8qu819uu39f9z670fggtnzi5edyyslnp792dgcrbo2jcahcas138r7kahyf3nynawq1lu37dia5rqvpy3cugz9rzb8an4eikdoal1b7742tllh36zoic35sbh5juw80gar3g7fyvh78fffgmm7csia48ozn87bnkftwmpkxq0ku87aevge44wjcwgiyhdfgtj7lwiics45wv9xpqbr4bz30fuxp9dn7r50fldxc21ndgq7u87ze2lz82915bbcds9opl54grf1dn8s7dnow7nbxzliwoulz7uxyzj7p76c9h67fzoizzmluqn4cu9nxwa80a1xyzydhxvyoicebqsmxbrgxtyc1f1ukyvsv6zhtlhzpwrpdcmrss7fm1fgtb0xtxbxmanl8pnxoa5x887tb7vh4vrpboghs5m4giagzyrret729jhsm7a6cibl7hb244tqvp9hzp9udargzoy599oxw9dhvwnyii6dj9ufhxad9hy6z7f3is3jbykamxfcros1vk5brr4tirhpebcomiwgsky6oe6cpjfxrgj9bkcy3nb1gtc2bs6xhwz9z1bz6yapr94eywbtpp9k7hwmqm8fjfk7fzzl9ztez6tn82kp7tygvbf467ap4lsan7cl2ovf3oltnmjzrirg9c2lgx2yc9dvg5xx629j0yglf67puwblmgakeiog21ifantqsjgzzzabchf0fs7m23lpzrzo3uqm249619ovifeip5llvjhj6kx25mywqyo90eu5mt9kqsha47bdokbfzdqpfv1kddk6cty3rdy35qh94z3phmegnz77v3gvfcwlambn9h1wqg8wiegoer5xghsg22apepumaxydw1e53sj5j0v603ynq5fli5nhou3beeq10azunhgf9190bcg26n0evosrir8uwilmz79umk9hj01mg6029smpp4gq
7xm7vu7rlqbcrw0189zrkvhcblmmb37zl7xb55vs5qo4mmw1424prvfb9iuo004l8kbgcqfz9fobp0i7w2hv0w80laa4oq8tli0vkwvyk2c5j5z5dl17chdnszklmx5g2krg6ue13u39m6jv9dz0xycbaifsow54fgl7cydcty0iv3eyue899tamhgh0nhbrtn5idfclfn5e7otlvdp1k13lsqagsg4z6vw12uawsbtagwqciephn1vilwe9uvw0hej91jcwjsljlh9vecy3qw0if853vfn7heah7tablbk7ym0fxht6zupa0qh9mj7epomdaibeuo55muptkdrpo3ia8nzgixzo0y7g0yjybjgl5220nycjkjqs9w3uczkw610i8u0jycer1gvaiwvud3drj2ltn8vn6og62h33hlqqxfdq85ry4zns257tyxselt89sy2ihr6urllqcr2bzct7581mjxi9au7138sybyx8jjq4nfshd9b8l20k0rtavhz9udbxeshfkhru8ikv4uksdcylse6dxl 00:07:03.860 08:57:40 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:03.860 08:57:40 -- dd/basic_rw.sh@59 -- # gen_conf 00:07:03.860 08:57:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:03.860 08:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.860 [2024-11-17 08:57:40.724455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.860 [2024-11-17 08:57:40.725241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58031 ] 00:07:03.860 { 00:07:03.860 "subsystems": [ 00:07:03.860 { 00:07:03.860 "subsystem": "bdev", 00:07:03.860 "config": [ 00:07:03.860 { 00:07:03.860 "params": { 00:07:03.860 "trtype": "pcie", 00:07:03.860 "traddr": "0000:00:06.0", 00:07:03.860 "name": "Nvme0" 00:07:03.860 }, 00:07:03.860 "method": "bdev_nvme_attach_controller" 00:07:03.860 }, 00:07:03.860 { 00:07:03.860 "method": "bdev_wait_for_examine" 00:07:03.860 } 00:07:03.860 ] 00:07:03.860 } 00:07:03.860 ] 00:07:03.860 } 00:07:04.119 [2024-11-17 08:57:40.862704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.119 [2024-11-17 08:57:40.911979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.119  [2024-11-17T08:57:41.308Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:04.378 00:07:04.378 08:57:41 -- dd/basic_rw.sh@65 -- # gen_conf 00:07:04.378 08:57:41 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:04.378 08:57:41 -- dd/common.sh@31 -- # xtrace_disable 00:07:04.378 08:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.378 [2024-11-17 08:57:41.247350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
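The dd_rw_offset test whose output starts here writes a 4096-byte pattern at block offset 1 with --seek and reads it back from the same offset with --skip, then compares the round-tripped data (the long [[ ... == ... ]] check below). A minimal sketch of the same round trip, assuming the gen_conf stand-in and paths from the earlier sketches; head -c /dev/urandom stands in for the test's gen_bytes, and cmp stands in for its in-shell string comparison.

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    head -c 4096 /dev/urandom > "$DUMP0"                  # stand-in for gen_bytes 4096
    # write at block offset 1, then read that single block back from offset 1
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(gen_conf)
    cmp -n 4096 "$DUMP0" "$DUMP1" && echo "offset round trip matches"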
00:07:04.378 [2024-11-17 08:57:41.247457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58038 ] 00:07:04.378 { 00:07:04.378 "subsystems": [ 00:07:04.378 { 00:07:04.378 "subsystem": "bdev", 00:07:04.378 "config": [ 00:07:04.378 { 00:07:04.378 "params": { 00:07:04.378 "trtype": "pcie", 00:07:04.378 "traddr": "0000:00:06.0", 00:07:04.378 "name": "Nvme0" 00:07:04.378 }, 00:07:04.378 "method": "bdev_nvme_attach_controller" 00:07:04.378 }, 00:07:04.378 { 00:07:04.378 "method": "bdev_wait_for_examine" 00:07:04.378 } 00:07:04.378 ] 00:07:04.378 } 00:07:04.378 ] 00:07:04.378 } 00:07:04.638 [2024-11-17 08:57:41.386272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.638 [2024-11-17 08:57:41.433375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.638  [2024-11-17T08:57:41.828Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:04.898 00:07:04.898 08:57:41 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:04.899 08:57:41 -- dd/basic_rw.sh@72 -- # [[ f0f5w5p4l4x7yhdm8wv2jyh578k90h6weuybm6unxn8m6rebci2ydcqz361f1hy41xz888n5xpwnmrdzwno9sg51v1wqku3iqhzvzld8ritpu498fqmputdwra9b4jywgkf0hkea966fi3l6jj8lymwuzcgkxf62yf31s202mitau1p9vob53gnpuo21gh7cqbbpwzovqix5a3rogqb5qfelnqxxokj82wregk2lwwn7srrvf1bk3imiuwyy5mfs8uykjuwzi7iu8xnryggvhb3pgvz6bixesorf1ieny6mtp56h8xpq6a13i3xg8o7d2pi0mqtn03h5iqvmjpvgkb14pqwr46rjknlwofvsxj21jqjqu0k8md6jrbr6dmlrdnadvky23ovmzr3qwuhhz95rjc896ymwh3faeby7bdtho860k2st10euw2nk01fa5bpf11vvzqfpnwh6vr7lcv35404hnvmjao0u3cgt1ebbsxzhftz80p96mqq63ncph9fln6xrcrucjtds1mmnob0jl523ozugc22pio6yauqm6g40d2fgnz6eat4rlc7r8hrvbxr5n4ittzqxz3wbiveg8urzu5bnpp12gv2hehcsk4vdy15sl45yfvoij2eov6myk5cyk53za21r6ibzsozhyx35opwne267egbgsudw3yxnga6l8w49fgekvntb76a1bkqgnwgampkqtdyrpqocknla02ydykazdpgj4xjuqmkqixze498eyluohb78ysl46f23yo0rvivrzcsql85eu3woilivfj3aany4vxan5pf4rqtzf3rlo3td75w5anf2mj8r6atrlmtbli9q91t10ghna7kfchndp8awpm64utd8mxytd2ilrucqc31j9s7frqdcvc62v6ta51zs2szrhaybc2mkfom22romwii6uj3xugp6i1vvrotwrkqq92u0en9wdvnjcic6i3r5wemywms7aawr5wjg5lx3w8xzhx81hsxk1dmep6vibizbfk7bobencmmddvyv9e5bs6l8uqc52nnjgy3e3u0dwqh6wn69glb5gjfz0n9jpl0sr0aj8zp7mssv9wamrpr7dhy450o4son7wj7mdu07icw1k31tgrdubdg0lfgbmh53mcpbk62c0w6ge8wlch7btwl0f8wr1e0sjfpqfkq3sfnio10jkjsebf26i4a4kj3tk6ufhxf4h8btomfjb32v6b5qucj7qjln2rv4vj8ga8yy1n2os6rfptbbue3edl7ntffflr33qn1khzyaoy5qnn312nhpv1g46mfg7ksb7n1oewhooipi9wxjm0iepkwtwbfqp0iumeopo5bq2t14hxg4ttj04b5z6tnzepvnqyqwy0gcfvps91fugd9d2lvee1nkrvf8nx17owgyygnv7hre56sfiqyku997a9xfovy5x79tuu217hqaadkn0dgz36ejr3c3wg11pv63mq5crogb8qhgtx7ga963199fiiezzplny3uv3oil3iy1az1mw3uwr1s3pgg0heq6pti829scxio7ajvaskyd4vcb0m4zfig40dd64bcprnr5mhl1ccq1iehr5julr4dsy8ykjfxd2vtyeitziwuhhp21mw05gspoo8ir5qeu82xq3p371hb0vf51tmuywftsbh31k8ni02krp2umkd2n15e17jwj2s1c3jj2wvnzzajijr1k70tzn768pc9cvuudz0o2vmg34dja6dixg3gbx7u6ajghsigkbihotacspfupbyxxmikbakjtr8oo9szoi12p11yasscsuohhckb7o3qla2hld3d3w9h1h0flwcggjpsloo6yqzn43p9g9uaxonp31w41kwc05ytkbtvjornqr1xrctixz3e3wmiwiknniosedquxsft3z931t4uq5p5y25mw6icluw6dk7utj0hsrjd8637us64a13hl590axieucirqksaudg0srj62gzzbuprr2u1a5gb5p9t6v3is3mwbjbfe8nooqi0iizyje0n6h3ebrqfp4syufnggwgzpfdsp8dqnix5lk4vxfvplk31xc1e61oa1zzasvnfd1skdcquv5zyzmy1uk9p3kuu875o8yejfp9siqfmsiacr293w97lzvvqemr8t03cl1c59kf9nzykj4egvt1k839xsafdrs31a0zdun7088jwvzvz89l1ttxqjf1tl4ua91di7xuj051kioyu22wa8pnw1ab03ohot8lqlno573wr806nzsqo5tm4jr7ndonf1d12pl7nilob1l0ovjefh2csi3pxm9t03gkudwdvy33dnoflidf8hfc0jlx1
nk7xouhrcmo3qzt6vjvpaw4j2b80ec3m7lnrhtr800ixesqhnfexeebjcrlcd7nm61z7s8ijna3vt6mr8k0jref9vhaju2sgvie56fm0a3adn25lgauiqjfzzuot3c50aokpuv7cg0rwugzbfpro60v537mr8mjg8fhzm86d1pk4m7axqinp8ffch7myqx6cxs4jdqodnct4jr8igz9rkmo82fh2y32tag6ufdecocwti8qu819uu39f9z670fggtnzi5edyyslnp792dgcrbo2jcahcas138r7kahyf3nynawq1lu37dia5rqvpy3cugz9rzb8an4eikdoal1b7742tllh36zoic35sbh5juw80gar3g7fyvh78fffgmm7csia48ozn87bnkftwmpkxq0ku87aevge44wjcwgiyhdfgtj7lwiics45wv9xpqbr4bz30fuxp9dn7r50fldxc21ndgq7u87ze2lz82915bbcds9opl54grf1dn8s7dnow7nbxzliwoulz7uxyzj7p76c9h67fzoizzmluqn4cu9nxwa80a1xyzydhxvyoicebqsmxbrgxtyc1f1ukyvsv6zhtlhzpwrpdcmrss7fm1fgtb0xtxbxmanl8pnxoa5x887tb7vh4vrpboghs5m4giagzyrret729jhsm7a6cibl7hb244tqvp9hzp9udargzoy599oxw9dhvwnyii6dj9ufhxad9hy6z7f3is3jbykamxfcros1vk5brr4tirhpebcomiwgsky6oe6cpjfxrgj9bkcy3nb1gtc2bs6xhwz9z1bz6yapr94eywbtpp9k7hwmqm8fjfk7fzzl9ztez6tn82kp7tygvbf467ap4lsan7cl2ovf3oltnmjzrirg9c2lgx2yc9dvg5xx629j0yglf67puwblmgakeiog21ifantqsjgzzzabchf0fs7m23lpzrzo3uqm249619ovifeip5llvjhj6kx25mywqyo90eu5mt9kqsha47bdokbfzdqpfv1kddk6cty3rdy35qh94z3phmegnz77v3gvfcwlambn9h1wqg8wiegoer5xghsg22apepumaxydw1e53sj5j0v603ynq5fli5nhou3beeq10azunhgf9190bcg26n0evosrir8uwilmz79umk9hj01mg6029smpp4gq7xm7vu7rlqbcrw0189zrkvhcblmmb37zl7xb55vs5qo4mmw1424prvfb9iuo004l8kbgcqfz9fobp0i7w2hv0w80laa4oq8tli0vkwvyk2c5j5z5dl17chdnszklmx5g2krg6ue13u39m6jv9dz0xycbaifsow54fgl7cydcty0iv3eyue899tamhgh0nhbrtn5idfclfn5e7otlvdp1k13lsqagsg4z6vw12uawsbtagwqciephn1vilwe9uvw0hej91jcwjsljlh9vecy3qw0if853vfn7heah7tablbk7ym0fxht6zupa0qh9mj7epomdaibeuo55muptkdrpo3ia8nzgixzo0y7g0yjybjgl5220nycjkjqs9w3uczkw610i8u0jycer1gvaiwvud3drj2ltn8vn6og62h33hlqqxfdq85ry4zns257tyxselt89sy2ihr6urllqcr2bzct7581mjxi9au7138sybyx8jjq4nfshd9b8l20k0rtavhz9udbxeshfkhru8ikv4uksdcylse6dxl == \f\0\f\5\w\5\p\4\l\4\x\7\y\h\d\m\8\w\v\2\j\y\h\5\7\8\k\9\0\h\6\w\e\u\y\b\m\6\u\n\x\n\8\m\6\r\e\b\c\i\2\y\d\c\q\z\3\6\1\f\1\h\y\4\1\x\z\8\8\8\n\5\x\p\w\n\m\r\d\z\w\n\o\9\s\g\5\1\v\1\w\q\k\u\3\i\q\h\z\v\z\l\d\8\r\i\t\p\u\4\9\8\f\q\m\p\u\t\d\w\r\a\9\b\4\j\y\w\g\k\f\0\h\k\e\a\9\6\6\f\i\3\l\6\j\j\8\l\y\m\w\u\z\c\g\k\x\f\6\2\y\f\3\1\s\2\0\2\m\i\t\a\u\1\p\9\v\o\b\5\3\g\n\p\u\o\2\1\g\h\7\c\q\b\b\p\w\z\o\v\q\i\x\5\a\3\r\o\g\q\b\5\q\f\e\l\n\q\x\x\o\k\j\8\2\w\r\e\g\k\2\l\w\w\n\7\s\r\r\v\f\1\b\k\3\i\m\i\u\w\y\y\5\m\f\s\8\u\y\k\j\u\w\z\i\7\i\u\8\x\n\r\y\g\g\v\h\b\3\p\g\v\z\6\b\i\x\e\s\o\r\f\1\i\e\n\y\6\m\t\p\5\6\h\8\x\p\q\6\a\1\3\i\3\x\g\8\o\7\d\2\p\i\0\m\q\t\n\0\3\h\5\i\q\v\m\j\p\v\g\k\b\1\4\p\q\w\r\4\6\r\j\k\n\l\w\o\f\v\s\x\j\2\1\j\q\j\q\u\0\k\8\m\d\6\j\r\b\r\6\d\m\l\r\d\n\a\d\v\k\y\2\3\o\v\m\z\r\3\q\w\u\h\h\z\9\5\r\j\c\8\9\6\y\m\w\h\3\f\a\e\b\y\7\b\d\t\h\o\8\6\0\k\2\s\t\1\0\e\u\w\2\n\k\0\1\f\a\5\b\p\f\1\1\v\v\z\q\f\p\n\w\h\6\v\r\7\l\c\v\3\5\4\0\4\h\n\v\m\j\a\o\0\u\3\c\g\t\1\e\b\b\s\x\z\h\f\t\z\8\0\p\9\6\m\q\q\6\3\n\c\p\h\9\f\l\n\6\x\r\c\r\u\c\j\t\d\s\1\m\m\n\o\b\0\j\l\5\2\3\o\z\u\g\c\2\2\p\i\o\6\y\a\u\q\m\6\g\4\0\d\2\f\g\n\z\6\e\a\t\4\r\l\c\7\r\8\h\r\v\b\x\r\5\n\4\i\t\t\z\q\x\z\3\w\b\i\v\e\g\8\u\r\z\u\5\b\n\p\p\1\2\g\v\2\h\e\h\c\s\k\4\v\d\y\1\5\s\l\4\5\y\f\v\o\i\j\2\e\o\v\6\m\y\k\5\c\y\k\5\3\z\a\2\1\r\6\i\b\z\s\o\z\h\y\x\3\5\o\p\w\n\e\2\6\7\e\g\b\g\s\u\d\w\3\y\x\n\g\a\6\l\8\w\4\9\f\g\e\k\v\n\t\b\7\6\a\1\b\k\q\g\n\w\g\a\m\p\k\q\t\d\y\r\p\q\o\c\k\n\l\a\0\2\y\d\y\k\a\z\d\p\g\j\4\x\j\u\q\m\k\q\i\x\z\e\4\9\8\e\y\l\u\o\h\b\7\8\y\s\l\4\6\f\2\3\y\o\0\r\v\i\v\r\z\c\s\q\l\8\5\e\u\3\w\o\i\l\i\v\f\j\3\a\a\n\y\4\v\x\a\n\5\p\f\4\r\q\t\z\f\3\r\l\o\3\t\d\7\5\w\5\a\n\f\2\m\j\8\r\6\a\t\r\l\m\t\b\l\i\9\q\9\1\t\1\0\g\h\n\a\7\k\f\c\h\n\d\p\8\a\w\p\m\6\4\u\t\d\8\m\x\y\t\d\2\i\l\r\u\c\q\c\3\1\j\9\s\7\f\r\q\d\c\v\c\6
\2\v\6\t\a\5\1\z\s\2\s\z\r\h\a\y\b\c\2\m\k\f\o\m\2\2\r\o\m\w\i\i\6\u\j\3\x\u\g\p\6\i\1\v\v\r\o\t\w\r\k\q\q\9\2\u\0\e\n\9\w\d\v\n\j\c\i\c\6\i\3\r\5\w\e\m\y\w\m\s\7\a\a\w\r\5\w\j\g\5\l\x\3\w\8\x\z\h\x\8\1\h\s\x\k\1\d\m\e\p\6\v\i\b\i\z\b\f\k\7\b\o\b\e\n\c\m\m\d\d\v\y\v\9\e\5\b\s\6\l\8\u\q\c\5\2\n\n\j\g\y\3\e\3\u\0\d\w\q\h\6\w\n\6\9\g\l\b\5\g\j\f\z\0\n\9\j\p\l\0\s\r\0\a\j\8\z\p\7\m\s\s\v\9\w\a\m\r\p\r\7\d\h\y\4\5\0\o\4\s\o\n\7\w\j\7\m\d\u\0\7\i\c\w\1\k\3\1\t\g\r\d\u\b\d\g\0\l\f\g\b\m\h\5\3\m\c\p\b\k\6\2\c\0\w\6\g\e\8\w\l\c\h\7\b\t\w\l\0\f\8\w\r\1\e\0\s\j\f\p\q\f\k\q\3\s\f\n\i\o\1\0\j\k\j\s\e\b\f\2\6\i\4\a\4\k\j\3\t\k\6\u\f\h\x\f\4\h\8\b\t\o\m\f\j\b\3\2\v\6\b\5\q\u\c\j\7\q\j\l\n\2\r\v\4\v\j\8\g\a\8\y\y\1\n\2\o\s\6\r\f\p\t\b\b\u\e\3\e\d\l\7\n\t\f\f\f\l\r\3\3\q\n\1\k\h\z\y\a\o\y\5\q\n\n\3\1\2\n\h\p\v\1\g\4\6\m\f\g\7\k\s\b\7\n\1\o\e\w\h\o\o\i\p\i\9\w\x\j\m\0\i\e\p\k\w\t\w\b\f\q\p\0\i\u\m\e\o\p\o\5\b\q\2\t\1\4\h\x\g\4\t\t\j\0\4\b\5\z\6\t\n\z\e\p\v\n\q\y\q\w\y\0\g\c\f\v\p\s\9\1\f\u\g\d\9\d\2\l\v\e\e\1\n\k\r\v\f\8\n\x\1\7\o\w\g\y\y\g\n\v\7\h\r\e\5\6\s\f\i\q\y\k\u\9\9\7\a\9\x\f\o\v\y\5\x\7\9\t\u\u\2\1\7\h\q\a\a\d\k\n\0\d\g\z\3\6\e\j\r\3\c\3\w\g\1\1\p\v\6\3\m\q\5\c\r\o\g\b\8\q\h\g\t\x\7\g\a\9\6\3\1\9\9\f\i\i\e\z\z\p\l\n\y\3\u\v\3\o\i\l\3\i\y\1\a\z\1\m\w\3\u\w\r\1\s\3\p\g\g\0\h\e\q\6\p\t\i\8\2\9\s\c\x\i\o\7\a\j\v\a\s\k\y\d\4\v\c\b\0\m\4\z\f\i\g\4\0\d\d\6\4\b\c\p\r\n\r\5\m\h\l\1\c\c\q\1\i\e\h\r\5\j\u\l\r\4\d\s\y\8\y\k\j\f\x\d\2\v\t\y\e\i\t\z\i\w\u\h\h\p\2\1\m\w\0\5\g\s\p\o\o\8\i\r\5\q\e\u\8\2\x\q\3\p\3\7\1\h\b\0\v\f\5\1\t\m\u\y\w\f\t\s\b\h\3\1\k\8\n\i\0\2\k\r\p\2\u\m\k\d\2\n\1\5\e\1\7\j\w\j\2\s\1\c\3\j\j\2\w\v\n\z\z\a\j\i\j\r\1\k\7\0\t\z\n\7\6\8\p\c\9\c\v\u\u\d\z\0\o\2\v\m\g\3\4\d\j\a\6\d\i\x\g\3\g\b\x\7\u\6\a\j\g\h\s\i\g\k\b\i\h\o\t\a\c\s\p\f\u\p\b\y\x\x\m\i\k\b\a\k\j\t\r\8\o\o\9\s\z\o\i\1\2\p\1\1\y\a\s\s\c\s\u\o\h\h\c\k\b\7\o\3\q\l\a\2\h\l\d\3\d\3\w\9\h\1\h\0\f\l\w\c\g\g\j\p\s\l\o\o\6\y\q\z\n\4\3\p\9\g\9\u\a\x\o\n\p\3\1\w\4\1\k\w\c\0\5\y\t\k\b\t\v\j\o\r\n\q\r\1\x\r\c\t\i\x\z\3\e\3\w\m\i\w\i\k\n\n\i\o\s\e\d\q\u\x\s\f\t\3\z\9\3\1\t\4\u\q\5\p\5\y\2\5\m\w\6\i\c\l\u\w\6\d\k\7\u\t\j\0\h\s\r\j\d\8\6\3\7\u\s\6\4\a\1\3\h\l\5\9\0\a\x\i\e\u\c\i\r\q\k\s\a\u\d\g\0\s\r\j\6\2\g\z\z\b\u\p\r\r\2\u\1\a\5\g\b\5\p\9\t\6\v\3\i\s\3\m\w\b\j\b\f\e\8\n\o\o\q\i\0\i\i\z\y\j\e\0\n\6\h\3\e\b\r\q\f\p\4\s\y\u\f\n\g\g\w\g\z\p\f\d\s\p\8\d\q\n\i\x\5\l\k\4\v\x\f\v\p\l\k\3\1\x\c\1\e\6\1\o\a\1\z\z\a\s\v\n\f\d\1\s\k\d\c\q\u\v\5\z\y\z\m\y\1\u\k\9\p\3\k\u\u\8\7\5\o\8\y\e\j\f\p\9\s\i\q\f\m\s\i\a\c\r\2\9\3\w\9\7\l\z\v\v\q\e\m\r\8\t\0\3\c\l\1\c\5\9\k\f\9\n\z\y\k\j\4\e\g\v\t\1\k\8\3\9\x\s\a\f\d\r\s\3\1\a\0\z\d\u\n\7\0\8\8\j\w\v\z\v\z\8\9\l\1\t\t\x\q\j\f\1\t\l\4\u\a\9\1\d\i\7\x\u\j\0\5\1\k\i\o\y\u\2\2\w\a\8\p\n\w\1\a\b\0\3\o\h\o\t\8\l\q\l\n\o\5\7\3\w\r\8\0\6\n\z\s\q\o\5\t\m\4\j\r\7\n\d\o\n\f\1\d\1\2\p\l\7\n\i\l\o\b\1\l\0\o\v\j\e\f\h\2\c\s\i\3\p\x\m\9\t\0\3\g\k\u\d\w\d\v\y\3\3\d\n\o\f\l\i\d\f\8\h\f\c\0\j\l\x\1\n\k\7\x\o\u\h\r\c\m\o\3\q\z\t\6\v\j\v\p\a\w\4\j\2\b\8\0\e\c\3\m\7\l\n\r\h\t\r\8\0\0\i\x\e\s\q\h\n\f\e\x\e\e\b\j\c\r\l\c\d\7\n\m\6\1\z\7\s\8\i\j\n\a\3\v\t\6\m\r\8\k\0\j\r\e\f\9\v\h\a\j\u\2\s\g\v\i\e\5\6\f\m\0\a\3\a\d\n\2\5\l\g\a\u\i\q\j\f\z\z\u\o\t\3\c\5\0\a\o\k\p\u\v\7\c\g\0\r\w\u\g\z\b\f\p\r\o\6\0\v\5\3\7\m\r\8\m\j\g\8\f\h\z\m\8\6\d\1\p\k\4\m\7\a\x\q\i\n\p\8\f\f\c\h\7\m\y\q\x\6\c\x\s\4\j\d\q\o\d\n\c\t\4\j\r\8\i\g\z\9\r\k\m\o\8\2\f\h\2\y\3\2\t\a\g\6\u\f\d\e\c\o\c\w\t\i\8\q\u\8\1\9\u\u\3\9\f\9\z\6\7\0\f\g\g\t\n\z\i\5\e\d\y\y\s\l\n\p\7\9\2\d\g\c\r\b\o\2\j\c\a\h\c\a\s\1\3\8\r\7\k\a\h\y\f\3\n\y\n\a\w\q\1\l\u\3\7\d\i\a\5\r\q\v\p\y\3\c\u\g\z\9\r\z\
b\8\a\n\4\e\i\k\d\o\a\l\1\b\7\7\4\2\t\l\l\h\3\6\z\o\i\c\3\5\s\b\h\5\j\u\w\8\0\g\a\r\3\g\7\f\y\v\h\7\8\f\f\f\g\m\m\7\c\s\i\a\4\8\o\z\n\8\7\b\n\k\f\t\w\m\p\k\x\q\0\k\u\8\7\a\e\v\g\e\4\4\w\j\c\w\g\i\y\h\d\f\g\t\j\7\l\w\i\i\c\s\4\5\w\v\9\x\p\q\b\r\4\b\z\3\0\f\u\x\p\9\d\n\7\r\5\0\f\l\d\x\c\2\1\n\d\g\q\7\u\8\7\z\e\2\l\z\8\2\9\1\5\b\b\c\d\s\9\o\p\l\5\4\g\r\f\1\d\n\8\s\7\d\n\o\w\7\n\b\x\z\l\i\w\o\u\l\z\7\u\x\y\z\j\7\p\7\6\c\9\h\6\7\f\z\o\i\z\z\m\l\u\q\n\4\c\u\9\n\x\w\a\8\0\a\1\x\y\z\y\d\h\x\v\y\o\i\c\e\b\q\s\m\x\b\r\g\x\t\y\c\1\f\1\u\k\y\v\s\v\6\z\h\t\l\h\z\p\w\r\p\d\c\m\r\s\s\7\f\m\1\f\g\t\b\0\x\t\x\b\x\m\a\n\l\8\p\n\x\o\a\5\x\8\8\7\t\b\7\v\h\4\v\r\p\b\o\g\h\s\5\m\4\g\i\a\g\z\y\r\r\e\t\7\2\9\j\h\s\m\7\a\6\c\i\b\l\7\h\b\2\4\4\t\q\v\p\9\h\z\p\9\u\d\a\r\g\z\o\y\5\9\9\o\x\w\9\d\h\v\w\n\y\i\i\6\d\j\9\u\f\h\x\a\d\9\h\y\6\z\7\f\3\i\s\3\j\b\y\k\a\m\x\f\c\r\o\s\1\v\k\5\b\r\r\4\t\i\r\h\p\e\b\c\o\m\i\w\g\s\k\y\6\o\e\6\c\p\j\f\x\r\g\j\9\b\k\c\y\3\n\b\1\g\t\c\2\b\s\6\x\h\w\z\9\z\1\b\z\6\y\a\p\r\9\4\e\y\w\b\t\p\p\9\k\7\h\w\m\q\m\8\f\j\f\k\7\f\z\z\l\9\z\t\e\z\6\t\n\8\2\k\p\7\t\y\g\v\b\f\4\6\7\a\p\4\l\s\a\n\7\c\l\2\o\v\f\3\o\l\t\n\m\j\z\r\i\r\g\9\c\2\l\g\x\2\y\c\9\d\v\g\5\x\x\6\2\9\j\0\y\g\l\f\6\7\p\u\w\b\l\m\g\a\k\e\i\o\g\2\1\i\f\a\n\t\q\s\j\g\z\z\z\a\b\c\h\f\0\f\s\7\m\2\3\l\p\z\r\z\o\3\u\q\m\2\4\9\6\1\9\o\v\i\f\e\i\p\5\l\l\v\j\h\j\6\k\x\2\5\m\y\w\q\y\o\9\0\e\u\5\m\t\9\k\q\s\h\a\4\7\b\d\o\k\b\f\z\d\q\p\f\v\1\k\d\d\k\6\c\t\y\3\r\d\y\3\5\q\h\9\4\z\3\p\h\m\e\g\n\z\7\7\v\3\g\v\f\c\w\l\a\m\b\n\9\h\1\w\q\g\8\w\i\e\g\o\e\r\5\x\g\h\s\g\2\2\a\p\e\p\u\m\a\x\y\d\w\1\e\5\3\s\j\5\j\0\v\6\0\3\y\n\q\5\f\l\i\5\n\h\o\u\3\b\e\e\q\1\0\a\z\u\n\h\g\f\9\1\9\0\b\c\g\2\6\n\0\e\v\o\s\r\i\r\8\u\w\i\l\m\z\7\9\u\m\k\9\h\j\0\1\m\g\6\0\2\9\s\m\p\p\4\g\q\7\x\m\7\v\u\7\r\l\q\b\c\r\w\0\1\8\9\z\r\k\v\h\c\b\l\m\m\b\3\7\z\l\7\x\b\5\5\v\s\5\q\o\4\m\m\w\1\4\2\4\p\r\v\f\b\9\i\u\o\0\0\4\l\8\k\b\g\c\q\f\z\9\f\o\b\p\0\i\7\w\2\h\v\0\w\8\0\l\a\a\4\o\q\8\t\l\i\0\v\k\w\v\y\k\2\c\5\j\5\z\5\d\l\1\7\c\h\d\n\s\z\k\l\m\x\5\g\2\k\r\g\6\u\e\1\3\u\3\9\m\6\j\v\9\d\z\0\x\y\c\b\a\i\f\s\o\w\5\4\f\g\l\7\c\y\d\c\t\y\0\i\v\3\e\y\u\e\8\9\9\t\a\m\h\g\h\0\n\h\b\r\t\n\5\i\d\f\c\l\f\n\5\e\7\o\t\l\v\d\p\1\k\1\3\l\s\q\a\g\s\g\4\z\6\v\w\1\2\u\a\w\s\b\t\a\g\w\q\c\i\e\p\h\n\1\v\i\l\w\e\9\u\v\w\0\h\e\j\9\1\j\c\w\j\s\l\j\l\h\9\v\e\c\y\3\q\w\0\i\f\8\5\3\v\f\n\7\h\e\a\h\7\t\a\b\l\b\k\7\y\m\0\f\x\h\t\6\z\u\p\a\0\q\h\9\m\j\7\e\p\o\m\d\a\i\b\e\u\o\5\5\m\u\p\t\k\d\r\p\o\3\i\a\8\n\z\g\i\x\z\o\0\y\7\g\0\y\j\y\b\j\g\l\5\2\2\0\n\y\c\j\k\j\q\s\9\w\3\u\c\z\k\w\6\1\0\i\8\u\0\j\y\c\e\r\1\g\v\a\i\w\v\u\d\3\d\r\j\2\l\t\n\8\v\n\6\o\g\6\2\h\3\3\h\l\q\q\x\f\d\q\8\5\r\y\4\z\n\s\2\5\7\t\y\x\s\e\l\t\8\9\s\y\2\i\h\r\6\u\r\l\l\q\c\r\2\b\z\c\t\7\5\8\1\m\j\x\i\9\a\u\7\1\3\8\s\y\b\y\x\8\j\j\q\4\n\f\s\h\d\9\b\8\l\2\0\k\0\r\t\a\v\h\z\9\u\d\b\x\e\s\h\f\k\h\r\u\8\i\k\v\4\u\k\s\d\c\y\l\s\e\6\d\x\l ]] 00:07:04.899 ************************************ 00:07:04.899 END TEST dd_rw_offset 00:07:04.899 ************************************ 00:07:04.899 00:07:04.899 real 0m1.106s 00:07:04.899 user 0m0.783s 00:07:04.899 sys 0m0.203s 00:07:04.899 08:57:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.899 08:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.899 08:57:41 -- dd/basic_rw.sh@1 -- # cleanup 00:07:04.899 08:57:41 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:04.899 08:57:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:04.899 08:57:41 -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.899 08:57:41 -- dd/common.sh@12 -- # local size=0xffff 00:07:04.899 08:57:41 -- dd/common.sh@14 -- 
# local bs=1048576 00:07:04.899 08:57:41 -- dd/common.sh@15 -- # local count=1 00:07:04.899 08:57:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:04.899 08:57:41 -- dd/common.sh@18 -- # gen_conf 00:07:04.899 08:57:41 -- dd/common.sh@31 -- # xtrace_disable 00:07:04.899 08:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:05.159 [2024-11-17 08:57:41.832866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.159 [2024-11-17 08:57:41.833016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58071 ] 00:07:05.159 { 00:07:05.159 "subsystems": [ 00:07:05.159 { 00:07:05.159 "subsystem": "bdev", 00:07:05.159 "config": [ 00:07:05.159 { 00:07:05.159 "params": { 00:07:05.159 "trtype": "pcie", 00:07:05.159 "traddr": "0000:00:06.0", 00:07:05.159 "name": "Nvme0" 00:07:05.159 }, 00:07:05.159 "method": "bdev_nvme_attach_controller" 00:07:05.159 }, 00:07:05.159 { 00:07:05.159 "method": "bdev_wait_for_examine" 00:07:05.159 } 00:07:05.159 ] 00:07:05.159 } 00:07:05.159 ] 00:07:05.159 } 00:07:05.159 [2024-11-17 08:57:41.974724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.159 [2024-11-17 08:57:42.023376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.418  [2024-11-17T08:57:42.348Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:05.418 00:07:05.418 08:57:42 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.418 00:07:05.418 real 0m15.400s 00:07:05.418 user 0m11.167s 00:07:05.418 sys 0m2.698s 00:07:05.418 08:57:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.418 ************************************ 00:07:05.418 END TEST spdk_dd_basic_rw 00:07:05.418 ************************************ 00:07:05.418 08:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.679 08:57:42 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:05.679 08:57:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.679 08:57:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.679 08:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.679 ************************************ 00:07:05.679 START TEST spdk_dd_posix 00:07:05.679 ************************************ 00:07:05.679 08:57:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:05.679 * Looking for test storage... 
00:07:05.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.679 08:57:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:05.679 08:57:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.679 08:57:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:05.679 08:57:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.679 08:57:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.679 08:57:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.679 08:57:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.679 08:57:42 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.679 08:57:42 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.679 08:57:42 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.679 08:57:42 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.679 08:57:42 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.679 08:57:42 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.679 08:57:42 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.679 08:57:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.679 08:57:42 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.679 08:57:42 -- scripts/common.sh@344 -- # : 1 00:07:05.679 08:57:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.679 08:57:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.679 08:57:42 -- scripts/common.sh@364 -- # decimal 1 00:07:05.679 08:57:42 -- scripts/common.sh@352 -- # local d=1 00:07:05.679 08:57:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.679 08:57:42 -- scripts/common.sh@354 -- # echo 1 00:07:05.679 08:57:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.679 08:57:42 -- scripts/common.sh@365 -- # decimal 2 00:07:05.679 08:57:42 -- scripts/common.sh@352 -- # local d=2 00:07:05.679 08:57:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.679 08:57:42 -- scripts/common.sh@354 -- # echo 2 00:07:05.679 08:57:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.679 08:57:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.679 08:57:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.679 08:57:42 -- scripts/common.sh@367 -- # return 0 00:07:05.679 08:57:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.679 08:57:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.679 --rc genhtml_branch_coverage=1 00:07:05.679 --rc genhtml_function_coverage=1 00:07:05.679 --rc genhtml_legend=1 00:07:05.679 --rc geninfo_all_blocks=1 00:07:05.679 --rc geninfo_unexecuted_blocks=1 00:07:05.679 00:07:05.679 ' 00:07:05.679 08:57:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.679 --rc genhtml_branch_coverage=1 00:07:05.679 --rc genhtml_function_coverage=1 00:07:05.679 --rc genhtml_legend=1 00:07:05.679 --rc geninfo_all_blocks=1 00:07:05.679 --rc geninfo_unexecuted_blocks=1 00:07:05.679 00:07:05.679 ' 00:07:05.679 08:57:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.679 --rc genhtml_branch_coverage=1 00:07:05.679 --rc genhtml_function_coverage=1 00:07:05.679 --rc genhtml_legend=1 00:07:05.679 --rc geninfo_all_blocks=1 00:07:05.679 --rc geninfo_unexecuted_blocks=1 00:07:05.679 00:07:05.679 ' 00:07:05.679 08:57:42 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.679 --rc genhtml_branch_coverage=1 00:07:05.679 --rc genhtml_function_coverage=1 00:07:05.679 --rc genhtml_legend=1 00:07:05.679 --rc geninfo_all_blocks=1 00:07:05.679 --rc geninfo_unexecuted_blocks=1 00:07:05.679 00:07:05.679 ' 00:07:05.679 08:57:42 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.679 08:57:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.679 08:57:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.679 08:57:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.679 08:57:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.679 08:57:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.679 08:57:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.679 08:57:42 -- paths/export.sh@5 -- # export PATH 00:07:05.679 08:57:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.679 08:57:42 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:05.679 08:57:42 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:05.680 08:57:42 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:05.680 08:57:42 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:05.680 08:57:42 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.680 08:57:42 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.680 08:57:42 -- dd/posix.sh@130 -- # tests 00:07:05.680 08:57:42 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:05.680 * First test run, liburing in use 00:07:05.680 08:57:42 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:05.680 08:57:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.680 08:57:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.680 08:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.680 ************************************ 00:07:05.680 START TEST dd_flag_append 00:07:05.680 ************************************ 00:07:05.680 08:57:42 -- common/autotest_common.sh@1114 -- # append 00:07:05.680 08:57:42 -- dd/posix.sh@16 -- # local dump0 00:07:05.680 08:57:42 -- dd/posix.sh@17 -- # local dump1 00:07:05.680 08:57:42 -- dd/posix.sh@19 -- # gen_bytes 32 00:07:05.680 08:57:42 -- dd/common.sh@98 -- # xtrace_disable 00:07:05.680 08:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.680 08:57:42 -- dd/posix.sh@19 -- # dump0=r69ioyv4ap6nyjjo0qzpa9gh9i7i2byq 00:07:05.680 08:57:42 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:05.680 08:57:42 -- dd/common.sh@98 -- # xtrace_disable 00:07:05.680 08:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.680 08:57:42 -- dd/posix.sh@20 -- # dump1=evkr9jyh60elkm0asga5tgvutc8034sg 00:07:05.680 08:57:42 -- dd/posix.sh@22 -- # printf %s r69ioyv4ap6nyjjo0qzpa9gh9i7i2byq 00:07:05.680 08:57:42 -- dd/posix.sh@23 -- # printf %s evkr9jyh60elkm0asga5tgvutc8034sg 00:07:05.680 08:57:42 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:05.939 [2024-11-17 08:57:42.617148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
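The dd_flag_append test that starts above seeds dd.dump0 and dd.dump1 with two 32-byte strings and then copies dump0 onto dump1 with --oflag=append; the pass criterion, checked just below, is that dump1 ends up as its original contents followed by dump0's. A minimal sketch using the exact strings generated in this run; the SPDK_DD/DUMP0/DUMP1 variables are shorthand introduced here.

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    printf %s r69ioyv4ap6nyjjo0qzpa9gh9i7i2byq > "$DUMP0"
    printf %s evkr9jyh60elkm0asga5tgvutc8034sg > "$DUMP1"
    # append mode must keep dump1's existing bytes and add dump0's after them
    "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --oflag=append
    [[ $(<"$DUMP1") == evkr9jyh60elkm0asga5tgvutc8034sgr69ioyv4ap6nyjjo0qzpa9gh9i7i2byq ]] \
      && echo "append preserved the existing contents"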
00:07:05.939 [2024-11-17 08:57:42.617264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58137 ] 00:07:05.939 [2024-11-17 08:57:42.758600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.939 [2024-11-17 08:57:42.807450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.939  [2024-11-17T08:57:43.128Z] Copying: 32/32 [B] (average 31 kBps) 00:07:06.198 00:07:06.198 08:57:43 -- dd/posix.sh@27 -- # [[ evkr9jyh60elkm0asga5tgvutc8034sgr69ioyv4ap6nyjjo0qzpa9gh9i7i2byq == \e\v\k\r\9\j\y\h\6\0\e\l\k\m\0\a\s\g\a\5\t\g\v\u\t\c\8\0\3\4\s\g\r\6\9\i\o\y\v\4\a\p\6\n\y\j\j\o\0\q\z\p\a\9\g\h\9\i\7\i\2\b\y\q ]] 00:07:06.198 00:07:06.198 real 0m0.490s 00:07:06.198 user 0m0.268s 00:07:06.198 sys 0m0.097s 00:07:06.198 08:57:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.198 ************************************ 00:07:06.198 END TEST dd_flag_append 00:07:06.198 ************************************ 00:07:06.198 08:57:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.198 08:57:43 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:06.198 08:57:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:06.198 08:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.198 08:57:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.198 ************************************ 00:07:06.198 START TEST dd_flag_directory 00:07:06.198 ************************************ 00:07:06.198 08:57:43 -- common/autotest_common.sh@1114 -- # directory 00:07:06.198 08:57:43 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.198 08:57:43 -- common/autotest_common.sh@650 -- # local es=0 00:07:06.198 08:57:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.198 08:57:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.198 08:57:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.198 08:57:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.199 08:57:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.199 08:57:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.199 08:57:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.199 08:57:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.199 08:57:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.199 08:57:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.457 [2024-11-17 08:57:43.164507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
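The dd_flag_directory run that begins above is a negative test: it points --iflag=directory (and then --oflag=directory) at a regular file and only passes when spdk_dd refuses with "Not a directory" and exits non-zero; the NOT wrapper and the es=... bookkeeping in the log perform that inversion. A minimal sketch of the same expectation for the input-side flag, with a plain if statement standing in for the harness's NOT helper:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

    # dd.dump0 is a regular file, so asking for directory semantics must fail
    if "$SPDK_DD" --if="$DUMP0" --iflag=directory --of="$DUMP0" 2>/dev/null; then
      echo "unexpected success: directory flag accepted a regular file" >&2
      exit 1
    fi
    echo "directory flag correctly rejected a regular file"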
00:07:06.457 [2024-11-17 08:57:43.164658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58163 ] 00:07:06.457 [2024-11-17 08:57:43.302852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.457 [2024-11-17 08:57:43.354866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.716 [2024-11-17 08:57:43.399524] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.716 [2024-11-17 08:57:43.399580] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.716 [2024-11-17 08:57:43.399640] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.716 [2024-11-17 08:57:43.461300] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:06.716 08:57:43 -- common/autotest_common.sh@653 -- # es=236 00:07:06.716 08:57:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.716 08:57:43 -- common/autotest_common.sh@662 -- # es=108 00:07:06.716 08:57:43 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:06.716 08:57:43 -- common/autotest_common.sh@670 -- # es=1 00:07:06.716 08:57:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.716 08:57:43 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.716 08:57:43 -- common/autotest_common.sh@650 -- # local es=0 00:07:06.716 08:57:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.716 08:57:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.716 08:57:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.716 08:57:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.716 08:57:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.716 08:57:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.716 08:57:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.716 08:57:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.716 08:57:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.716 08:57:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.716 [2024-11-17 08:57:43.608936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.716 [2024-11-17 08:57:43.609029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58173 ] 00:07:06.975 [2024-11-17 08:57:43.746651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.975 [2024-11-17 08:57:43.795392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.975 [2024-11-17 08:57:43.837666] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.975 [2024-11-17 08:57:43.837717] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.975 [2024-11-17 08:57:43.837747] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.975 [2024-11-17 08:57:43.896941] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:07.234 08:57:44 -- common/autotest_common.sh@653 -- # es=236 00:07:07.234 08:57:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.234 08:57:44 -- common/autotest_common.sh@662 -- # es=108 00:07:07.234 ************************************ 00:07:07.234 END TEST dd_flag_directory 00:07:07.234 ************************************ 00:07:07.234 08:57:44 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.234 08:57:44 -- common/autotest_common.sh@670 -- # es=1 00:07:07.234 08:57:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.234 00:07:07.234 real 0m0.900s 00:07:07.234 user 0m0.507s 00:07:07.234 sys 0m0.185s 00:07:07.234 08:57:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.234 08:57:44 -- common/autotest_common.sh@10 -- # set +x 00:07:07.234 08:57:44 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:07.234 08:57:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.234 08:57:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.234 08:57:44 -- common/autotest_common.sh@10 -- # set +x 00:07:07.234 ************************************ 00:07:07.234 START TEST dd_flag_nofollow 00:07:07.234 ************************************ 00:07:07.234 08:57:44 -- common/autotest_common.sh@1114 -- # nofollow 00:07:07.234 08:57:44 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.234 08:57:44 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.234 08:57:44 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.234 08:57:44 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.234 08:57:44 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.234 08:57:44 -- common/autotest_common.sh@650 -- # local es=0 00:07:07.234 08:57:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.234 08:57:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.234 08:57:44 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.234 08:57:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.234 08:57:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.234 08:57:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.234 08:57:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.234 08:57:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.234 08:57:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.234 08:57:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.234 [2024-11-17 08:57:44.125264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.234 [2024-11-17 08:57:44.125388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58201 ] 00:07:07.493 [2024-11-17 08:57:44.263305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.493 [2024-11-17 08:57:44.314605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.493 [2024-11-17 08:57:44.359216] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:07.493 [2024-11-17 08:57:44.359260] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:07.493 [2024-11-17 08:57:44.359289] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.753 [2024-11-17 08:57:44.426066] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:07.753 08:57:44 -- common/autotest_common.sh@653 -- # es=216 00:07:07.753 08:57:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.753 08:57:44 -- common/autotest_common.sh@662 -- # es=88 00:07:07.753 08:57:44 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.753 08:57:44 -- common/autotest_common.sh@670 -- # es=1 00:07:07.753 08:57:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.753 08:57:44 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.753 08:57:44 -- common/autotest_common.sh@650 -- # local es=0 00:07:07.753 08:57:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.753 08:57:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.753 08:57:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.753 08:57:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.753 08:57:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.753 08:57:44 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.753 08:57:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.753 08:57:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.753 08:57:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.753 08:57:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.753 [2024-11-17 08:57:44.589573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.753 [2024-11-17 08:57:44.589698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:07:08.012 [2024-11-17 08:57:44.727377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.012 [2024-11-17 08:57:44.781099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.012 [2024-11-17 08:57:44.824676] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.012 [2024-11-17 08:57:44.824952] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.012 [2024-11-17 08:57:44.825111] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.012 [2024-11-17 08:57:44.889805] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:08.271 08:57:44 -- common/autotest_common.sh@653 -- # es=216 00:07:08.271 08:57:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.271 08:57:44 -- common/autotest_common.sh@662 -- # es=88 00:07:08.271 08:57:44 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:08.271 08:57:44 -- common/autotest_common.sh@670 -- # es=1 00:07:08.271 08:57:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.271 08:57:44 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:08.271 08:57:44 -- dd/common.sh@98 -- # xtrace_disable 00:07:08.271 08:57:44 -- common/autotest_common.sh@10 -- # set +x 00:07:08.271 08:57:45 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.271 [2024-11-17 08:57:45.063399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:08.271 [2024-11-17 08:57:45.063728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58218 ] 00:07:08.271 [2024-11-17 08:57:45.197869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.531 [2024-11-17 08:57:45.247875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.531  [2024-11-17T08:57:45.734Z] Copying: 512/512 [B] (average 500 kBps) 00:07:08.804 00:07:08.804 08:57:45 -- dd/posix.sh@49 -- # [[ ood4pxta8xb9arm5bqsifh34odtmyj2xoeqnyj4nl954d2j626fl7245zoyjcv2s81tlgye9h8hvij4e5agaxm5de3twxkgbkuymf0t1rbeutigqr5lqw4d24x9s90he8xbvudmtw7q8oaagcjmh8ake25tjexb2eioewgda8nuurp2slu9k32zuyyl6l4mohn2clh4ugg9guo166c94tyf5j35ccgxcxp4z9lvunsf9xmjitusw4w98h4ipf0tdkzra25ainpjd7qmg2m1kt3ufwqsa6e6xbtp8olj2sewzboz7nett7zx1demadj5v8p1sfshasgur0pvb9exr1xtew05g02cxbznnu8p3xhnbiwny5983udqwd465pccxr1ny9vg39xrorjiwkjglqyghjgf9xpintcqrxkpaz18weupplczq2ecfm0f8s7y4f4r9w5fo6nysaezc4qewt9o3ieakwmnuect7id3hdafvf9r2ucqqm3568eatfljr == \o\o\d\4\p\x\t\a\8\x\b\9\a\r\m\5\b\q\s\i\f\h\3\4\o\d\t\m\y\j\2\x\o\e\q\n\y\j\4\n\l\9\5\4\d\2\j\6\2\6\f\l\7\2\4\5\z\o\y\j\c\v\2\s\8\1\t\l\g\y\e\9\h\8\h\v\i\j\4\e\5\a\g\a\x\m\5\d\e\3\t\w\x\k\g\b\k\u\y\m\f\0\t\1\r\b\e\u\t\i\g\q\r\5\l\q\w\4\d\2\4\x\9\s\9\0\h\e\8\x\b\v\u\d\m\t\w\7\q\8\o\a\a\g\c\j\m\h\8\a\k\e\2\5\t\j\e\x\b\2\e\i\o\e\w\g\d\a\8\n\u\u\r\p\2\s\l\u\9\k\3\2\z\u\y\y\l\6\l\4\m\o\h\n\2\c\l\h\4\u\g\g\9\g\u\o\1\6\6\c\9\4\t\y\f\5\j\3\5\c\c\g\x\c\x\p\4\z\9\l\v\u\n\s\f\9\x\m\j\i\t\u\s\w\4\w\9\8\h\4\i\p\f\0\t\d\k\z\r\a\2\5\a\i\n\p\j\d\7\q\m\g\2\m\1\k\t\3\u\f\w\q\s\a\6\e\6\x\b\t\p\8\o\l\j\2\s\e\w\z\b\o\z\7\n\e\t\t\7\z\x\1\d\e\m\a\d\j\5\v\8\p\1\s\f\s\h\a\s\g\u\r\0\p\v\b\9\e\x\r\1\x\t\e\w\0\5\g\0\2\c\x\b\z\n\n\u\8\p\3\x\h\n\b\i\w\n\y\5\9\8\3\u\d\q\w\d\4\6\5\p\c\c\x\r\1\n\y\9\v\g\3\9\x\r\o\r\j\i\w\k\j\g\l\q\y\g\h\j\g\f\9\x\p\i\n\t\c\q\r\x\k\p\a\z\1\8\w\e\u\p\p\l\c\z\q\2\e\c\f\m\0\f\8\s\7\y\4\f\4\r\9\w\5\f\o\6\n\y\s\a\e\z\c\4\q\e\w\t\9\o\3\i\e\a\k\w\m\n\u\e\c\t\7\i\d\3\h\d\a\f\v\f\9\r\2\u\c\q\q\m\3\5\6\8\e\a\t\f\l\j\r ]] 00:07:08.804 00:07:08.804 real 0m1.418s 00:07:08.804 user 0m0.799s 00:07:08.804 sys 0m0.287s 00:07:08.804 ************************************ 00:07:08.804 END TEST dd_flag_nofollow 00:07:08.804 ************************************ 00:07:08.804 08:57:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.804 08:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:08.804 08:57:45 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:08.804 08:57:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.804 08:57:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.804 08:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:08.804 ************************************ 00:07:08.804 START TEST dd_flag_noatime 00:07:08.804 ************************************ 00:07:08.804 08:57:45 -- common/autotest_common.sh@1114 -- # noatime 00:07:08.804 08:57:45 -- dd/posix.sh@53 -- # local atime_if 00:07:08.804 08:57:45 -- dd/posix.sh@54 -- # local atime_of 00:07:08.804 08:57:45 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:08.804 08:57:45 -- dd/common.sh@98 -- # xtrace_disable 00:07:08.804 08:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:08.804 08:57:45 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.804 08:57:45 -- dd/posix.sh@60 -- # atime_if=1731833865 
00:07:08.804 08:57:45 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.804 08:57:45 -- dd/posix.sh@61 -- # atime_of=1731833865 00:07:08.804 08:57:45 -- dd/posix.sh@66 -- # sleep 1 00:07:09.781 08:57:46 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.781 [2024-11-17 08:57:46.602615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.781 [2024-11-17 08:57:46.602736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58259 ] 00:07:10.041 [2024-11-17 08:57:46.743423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.041 [2024-11-17 08:57:46.814265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.041  [2024-11-17T08:57:47.230Z] Copying: 512/512 [B] (average 500 kBps) 00:07:10.300 00:07:10.300 08:57:47 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.300 08:57:47 -- dd/posix.sh@69 -- # (( atime_if == 1731833865 )) 00:07:10.300 08:57:47 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.300 08:57:47 -- dd/posix.sh@70 -- # (( atime_of == 1731833865 )) 00:07:10.300 08:57:47 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.300 [2024-11-17 08:57:47.129787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:10.300 [2024-11-17 08:57:47.129925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58270 ] 00:07:10.559 [2024-11-17 08:57:47.266336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.559 [2024-11-17 08:57:47.315147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.559  [2024-11-17T08:57:47.749Z] Copying: 512/512 [B] (average 500 kBps) 00:07:10.819 00:07:10.819 08:57:47 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.819 08:57:47 -- dd/posix.sh@73 -- # (( atime_if < 1731833867 )) 00:07:10.819 00:07:10.819 real 0m2.038s 00:07:10.819 user 0m0.567s 00:07:10.819 sys 0m0.216s 00:07:10.819 ************************************ 00:07:10.819 END TEST dd_flag_noatime 00:07:10.819 ************************************ 00:07:10.819 08:57:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.819 08:57:47 -- common/autotest_common.sh@10 -- # set +x 00:07:10.819 08:57:47 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:10.819 08:57:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.819 08:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.819 08:57:47 -- common/autotest_common.sh@10 -- # set +x 00:07:10.819 ************************************ 00:07:10.819 START TEST dd_flags_misc 00:07:10.819 ************************************ 00:07:10.819 08:57:47 -- common/autotest_common.sh@1114 -- # io 00:07:10.819 08:57:47 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:10.819 08:57:47 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:10.819 08:57:47 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:10.819 08:57:47 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:10.819 08:57:47 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:10.819 08:57:47 -- dd/common.sh@98 -- # xtrace_disable 00:07:10.819 08:57:47 -- common/autotest_common.sh@10 -- # set +x 00:07:10.819 08:57:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.819 08:57:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:10.819 [2024-11-17 08:57:47.682783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:10.819 [2024-11-17 08:57:47.682905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58297 ] 00:07:11.077 [2024-11-17 08:57:47.821930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.077 [2024-11-17 08:57:47.871912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.078  [2024-11-17T08:57:48.266Z] Copying: 512/512 [B] (average 500 kBps) 00:07:11.336 00:07:11.336 08:57:48 -- dd/posix.sh@93 -- # [[ xco62kl1v8xpxv767mqeul1cndg6qu1v3gjrz9cqwvar7uw1zqhq8jwyfx3ersg8i3g3utejqk93rinkt1zr2xc3dfwr54cfy3zvm5me8sw9bd32ok85608ylxxo1m976kq8b8g0k4qd0as32gfsmj181bpurfz66z38z1484vrmogs74b6onxr0c0d0mqddvyzdy6idd1ejzhwkuza4mr1q7fv7m49tbqdbnagie3goh7otndapz43yp2eo2w0f7i81scfbezhat1rgg8vjfsejqobauu3q3tl9b8nkqbd81c03h771kaih5biy75929nl86gyv75rv36zke6kt9hlcfpxwwcrbsrlafhzkmrum5ogqsw0a135i9eok442yepd4wtsfmp2hbvkbcckk56l7brnwet8jdux9tdmaxmmxc3hcay8xugmomom5cx3albhafx0oiboq9tdqyhqwc9lgp4crvtq4p33dwvoj4qzqjrou0thar2h9rfmqom7n == \x\c\o\6\2\k\l\1\v\8\x\p\x\v\7\6\7\m\q\e\u\l\1\c\n\d\g\6\q\u\1\v\3\g\j\r\z\9\c\q\w\v\a\r\7\u\w\1\z\q\h\q\8\j\w\y\f\x\3\e\r\s\g\8\i\3\g\3\u\t\e\j\q\k\9\3\r\i\n\k\t\1\z\r\2\x\c\3\d\f\w\r\5\4\c\f\y\3\z\v\m\5\m\e\8\s\w\9\b\d\3\2\o\k\8\5\6\0\8\y\l\x\x\o\1\m\9\7\6\k\q\8\b\8\g\0\k\4\q\d\0\a\s\3\2\g\f\s\m\j\1\8\1\b\p\u\r\f\z\6\6\z\3\8\z\1\4\8\4\v\r\m\o\g\s\7\4\b\6\o\n\x\r\0\c\0\d\0\m\q\d\d\v\y\z\d\y\6\i\d\d\1\e\j\z\h\w\k\u\z\a\4\m\r\1\q\7\f\v\7\m\4\9\t\b\q\d\b\n\a\g\i\e\3\g\o\h\7\o\t\n\d\a\p\z\4\3\y\p\2\e\o\2\w\0\f\7\i\8\1\s\c\f\b\e\z\h\a\t\1\r\g\g\8\v\j\f\s\e\j\q\o\b\a\u\u\3\q\3\t\l\9\b\8\n\k\q\b\d\8\1\c\0\3\h\7\7\1\k\a\i\h\5\b\i\y\7\5\9\2\9\n\l\8\6\g\y\v\7\5\r\v\3\6\z\k\e\6\k\t\9\h\l\c\f\p\x\w\w\c\r\b\s\r\l\a\f\h\z\k\m\r\u\m\5\o\g\q\s\w\0\a\1\3\5\i\9\e\o\k\4\4\2\y\e\p\d\4\w\t\s\f\m\p\2\h\b\v\k\b\c\c\k\k\5\6\l\7\b\r\n\w\e\t\8\j\d\u\x\9\t\d\m\a\x\m\m\x\c\3\h\c\a\y\8\x\u\g\m\o\m\o\m\5\c\x\3\a\l\b\h\a\f\x\0\o\i\b\o\q\9\t\d\q\y\h\q\w\c\9\l\g\p\4\c\r\v\t\q\4\p\3\3\d\w\v\o\j\4\q\z\q\j\r\o\u\0\t\h\a\r\2\h\9\r\f\m\q\o\m\7\n ]] 00:07:11.336 08:57:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.336 08:57:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:11.336 [2024-11-17 08:57:48.146780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:11.336 [2024-11-17 08:57:48.146904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58304 ] 00:07:11.598 [2024-11-17 08:57:48.286645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.598 [2024-11-17 08:57:48.334208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.598  [2024-11-17T08:57:48.787Z] Copying: 512/512 [B] (average 500 kBps) 00:07:11.857 00:07:11.858 08:57:48 -- dd/posix.sh@93 -- # [[ xco62kl1v8xpxv767mqeul1cndg6qu1v3gjrz9cqwvar7uw1zqhq8jwyfx3ersg8i3g3utejqk93rinkt1zr2xc3dfwr54cfy3zvm5me8sw9bd32ok85608ylxxo1m976kq8b8g0k4qd0as32gfsmj181bpurfz66z38z1484vrmogs74b6onxr0c0d0mqddvyzdy6idd1ejzhwkuza4mr1q7fv7m49tbqdbnagie3goh7otndapz43yp2eo2w0f7i81scfbezhat1rgg8vjfsejqobauu3q3tl9b8nkqbd81c03h771kaih5biy75929nl86gyv75rv36zke6kt9hlcfpxwwcrbsrlafhzkmrum5ogqsw0a135i9eok442yepd4wtsfmp2hbvkbcckk56l7brnwet8jdux9tdmaxmmxc3hcay8xugmomom5cx3albhafx0oiboq9tdqyhqwc9lgp4crvtq4p33dwvoj4qzqjrou0thar2h9rfmqom7n == \x\c\o\6\2\k\l\1\v\8\x\p\x\v\7\6\7\m\q\e\u\l\1\c\n\d\g\6\q\u\1\v\3\g\j\r\z\9\c\q\w\v\a\r\7\u\w\1\z\q\h\q\8\j\w\y\f\x\3\e\r\s\g\8\i\3\g\3\u\t\e\j\q\k\9\3\r\i\n\k\t\1\z\r\2\x\c\3\d\f\w\r\5\4\c\f\y\3\z\v\m\5\m\e\8\s\w\9\b\d\3\2\o\k\8\5\6\0\8\y\l\x\x\o\1\m\9\7\6\k\q\8\b\8\g\0\k\4\q\d\0\a\s\3\2\g\f\s\m\j\1\8\1\b\p\u\r\f\z\6\6\z\3\8\z\1\4\8\4\v\r\m\o\g\s\7\4\b\6\o\n\x\r\0\c\0\d\0\m\q\d\d\v\y\z\d\y\6\i\d\d\1\e\j\z\h\w\k\u\z\a\4\m\r\1\q\7\f\v\7\m\4\9\t\b\q\d\b\n\a\g\i\e\3\g\o\h\7\o\t\n\d\a\p\z\4\3\y\p\2\e\o\2\w\0\f\7\i\8\1\s\c\f\b\e\z\h\a\t\1\r\g\g\8\v\j\f\s\e\j\q\o\b\a\u\u\3\q\3\t\l\9\b\8\n\k\q\b\d\8\1\c\0\3\h\7\7\1\k\a\i\h\5\b\i\y\7\5\9\2\9\n\l\8\6\g\y\v\7\5\r\v\3\6\z\k\e\6\k\t\9\h\l\c\f\p\x\w\w\c\r\b\s\r\l\a\f\h\z\k\m\r\u\m\5\o\g\q\s\w\0\a\1\3\5\i\9\e\o\k\4\4\2\y\e\p\d\4\w\t\s\f\m\p\2\h\b\v\k\b\c\c\k\k\5\6\l\7\b\r\n\w\e\t\8\j\d\u\x\9\t\d\m\a\x\m\m\x\c\3\h\c\a\y\8\x\u\g\m\o\m\o\m\5\c\x\3\a\l\b\h\a\f\x\0\o\i\b\o\q\9\t\d\q\y\h\q\w\c\9\l\g\p\4\c\r\v\t\q\4\p\3\3\d\w\v\o\j\4\q\z\q\j\r\o\u\0\t\h\a\r\2\h\9\r\f\m\q\o\m\7\n ]] 00:07:11.858 08:57:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.858 08:57:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:11.858 [2024-11-17 08:57:48.625254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:11.858 [2024-11-17 08:57:48.625375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58312 ] 00:07:11.858 [2024-11-17 08:57:48.762986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.117 [2024-11-17 08:57:48.813578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.117  [2024-11-17T08:57:49.047Z] Copying: 512/512 [B] (average 125 kBps) 00:07:12.117 00:07:12.375 08:57:49 -- dd/posix.sh@93 -- # [[ xco62kl1v8xpxv767mqeul1cndg6qu1v3gjrz9cqwvar7uw1zqhq8jwyfx3ersg8i3g3utejqk93rinkt1zr2xc3dfwr54cfy3zvm5me8sw9bd32ok85608ylxxo1m976kq8b8g0k4qd0as32gfsmj181bpurfz66z38z1484vrmogs74b6onxr0c0d0mqddvyzdy6idd1ejzhwkuza4mr1q7fv7m49tbqdbnagie3goh7otndapz43yp2eo2w0f7i81scfbezhat1rgg8vjfsejqobauu3q3tl9b8nkqbd81c03h771kaih5biy75929nl86gyv75rv36zke6kt9hlcfpxwwcrbsrlafhzkmrum5ogqsw0a135i9eok442yepd4wtsfmp2hbvkbcckk56l7brnwet8jdux9tdmaxmmxc3hcay8xugmomom5cx3albhafx0oiboq9tdqyhqwc9lgp4crvtq4p33dwvoj4qzqjrou0thar2h9rfmqom7n == \x\c\o\6\2\k\l\1\v\8\x\p\x\v\7\6\7\m\q\e\u\l\1\c\n\d\g\6\q\u\1\v\3\g\j\r\z\9\c\q\w\v\a\r\7\u\w\1\z\q\h\q\8\j\w\y\f\x\3\e\r\s\g\8\i\3\g\3\u\t\e\j\q\k\9\3\r\i\n\k\t\1\z\r\2\x\c\3\d\f\w\r\5\4\c\f\y\3\z\v\m\5\m\e\8\s\w\9\b\d\3\2\o\k\8\5\6\0\8\y\l\x\x\o\1\m\9\7\6\k\q\8\b\8\g\0\k\4\q\d\0\a\s\3\2\g\f\s\m\j\1\8\1\b\p\u\r\f\z\6\6\z\3\8\z\1\4\8\4\v\r\m\o\g\s\7\4\b\6\o\n\x\r\0\c\0\d\0\m\q\d\d\v\y\z\d\y\6\i\d\d\1\e\j\z\h\w\k\u\z\a\4\m\r\1\q\7\f\v\7\m\4\9\t\b\q\d\b\n\a\g\i\e\3\g\o\h\7\o\t\n\d\a\p\z\4\3\y\p\2\e\o\2\w\0\f\7\i\8\1\s\c\f\b\e\z\h\a\t\1\r\g\g\8\v\j\f\s\e\j\q\o\b\a\u\u\3\q\3\t\l\9\b\8\n\k\q\b\d\8\1\c\0\3\h\7\7\1\k\a\i\h\5\b\i\y\7\5\9\2\9\n\l\8\6\g\y\v\7\5\r\v\3\6\z\k\e\6\k\t\9\h\l\c\f\p\x\w\w\c\r\b\s\r\l\a\f\h\z\k\m\r\u\m\5\o\g\q\s\w\0\a\1\3\5\i\9\e\o\k\4\4\2\y\e\p\d\4\w\t\s\f\m\p\2\h\b\v\k\b\c\c\k\k\5\6\l\7\b\r\n\w\e\t\8\j\d\u\x\9\t\d\m\a\x\m\m\x\c\3\h\c\a\y\8\x\u\g\m\o\m\o\m\5\c\x\3\a\l\b\h\a\f\x\0\o\i\b\o\q\9\t\d\q\y\h\q\w\c\9\l\g\p\4\c\r\v\t\q\4\p\3\3\d\w\v\o\j\4\q\z\q\j\r\o\u\0\t\h\a\r\2\h\9\r\f\m\q\o\m\7\n ]] 00:07:12.375 08:57:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.375 08:57:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:12.375 [2024-11-17 08:57:49.084360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:12.375 [2024-11-17 08:57:49.084446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58319 ] 00:07:12.375 [2024-11-17 08:57:49.212782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.375 [2024-11-17 08:57:49.267476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.634  [2024-11-17T08:57:49.564Z] Copying: 512/512 [B] (average 500 kBps) 00:07:12.634 00:07:12.634 08:57:49 -- dd/posix.sh@93 -- # [[ xco62kl1v8xpxv767mqeul1cndg6qu1v3gjrz9cqwvar7uw1zqhq8jwyfx3ersg8i3g3utejqk93rinkt1zr2xc3dfwr54cfy3zvm5me8sw9bd32ok85608ylxxo1m976kq8b8g0k4qd0as32gfsmj181bpurfz66z38z1484vrmogs74b6onxr0c0d0mqddvyzdy6idd1ejzhwkuza4mr1q7fv7m49tbqdbnagie3goh7otndapz43yp2eo2w0f7i81scfbezhat1rgg8vjfsejqobauu3q3tl9b8nkqbd81c03h771kaih5biy75929nl86gyv75rv36zke6kt9hlcfpxwwcrbsrlafhzkmrum5ogqsw0a135i9eok442yepd4wtsfmp2hbvkbcckk56l7brnwet8jdux9tdmaxmmxc3hcay8xugmomom5cx3albhafx0oiboq9tdqyhqwc9lgp4crvtq4p33dwvoj4qzqjrou0thar2h9rfmqom7n == \x\c\o\6\2\k\l\1\v\8\x\p\x\v\7\6\7\m\q\e\u\l\1\c\n\d\g\6\q\u\1\v\3\g\j\r\z\9\c\q\w\v\a\r\7\u\w\1\z\q\h\q\8\j\w\y\f\x\3\e\r\s\g\8\i\3\g\3\u\t\e\j\q\k\9\3\r\i\n\k\t\1\z\r\2\x\c\3\d\f\w\r\5\4\c\f\y\3\z\v\m\5\m\e\8\s\w\9\b\d\3\2\o\k\8\5\6\0\8\y\l\x\x\o\1\m\9\7\6\k\q\8\b\8\g\0\k\4\q\d\0\a\s\3\2\g\f\s\m\j\1\8\1\b\p\u\r\f\z\6\6\z\3\8\z\1\4\8\4\v\r\m\o\g\s\7\4\b\6\o\n\x\r\0\c\0\d\0\m\q\d\d\v\y\z\d\y\6\i\d\d\1\e\j\z\h\w\k\u\z\a\4\m\r\1\q\7\f\v\7\m\4\9\t\b\q\d\b\n\a\g\i\e\3\g\o\h\7\o\t\n\d\a\p\z\4\3\y\p\2\e\o\2\w\0\f\7\i\8\1\s\c\f\b\e\z\h\a\t\1\r\g\g\8\v\j\f\s\e\j\q\o\b\a\u\u\3\q\3\t\l\9\b\8\n\k\q\b\d\8\1\c\0\3\h\7\7\1\k\a\i\h\5\b\i\y\7\5\9\2\9\n\l\8\6\g\y\v\7\5\r\v\3\6\z\k\e\6\k\t\9\h\l\c\f\p\x\w\w\c\r\b\s\r\l\a\f\h\z\k\m\r\u\m\5\o\g\q\s\w\0\a\1\3\5\i\9\e\o\k\4\4\2\y\e\p\d\4\w\t\s\f\m\p\2\h\b\v\k\b\c\c\k\k\5\6\l\7\b\r\n\w\e\t\8\j\d\u\x\9\t\d\m\a\x\m\m\x\c\3\h\c\a\y\8\x\u\g\m\o\m\o\m\5\c\x\3\a\l\b\h\a\f\x\0\o\i\b\o\q\9\t\d\q\y\h\q\w\c\9\l\g\p\4\c\r\v\t\q\4\p\3\3\d\w\v\o\j\4\q\z\q\j\r\o\u\0\t\h\a\r\2\h\9\r\f\m\q\o\m\7\n ]] 00:07:12.634 08:57:49 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.634 08:57:49 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.634 08:57:49 -- dd/common.sh@98 -- # xtrace_disable 00:07:12.634 08:57:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.634 08:57:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.634 08:57:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.634 [2024-11-17 08:57:49.556459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:12.634 [2024-11-17 08:57:49.556721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58327 ] 00:07:12.893 [2024-11-17 08:57:49.695910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.893 [2024-11-17 08:57:49.756977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.893  [2024-11-17T08:57:50.082Z] Copying: 512/512 [B] (average 500 kBps) 00:07:13.152 00:07:13.152 08:57:49 -- dd/posix.sh@93 -- # [[ gv7rb87yu6zfv9jvu0qzlp0xs602xhc6i3eug40pdecb50fbbz4azwpioazrtrp10f2fzz3ad6hqmfude5d2i808lafgv04d7zg1350bwua3albmijp24n0o8cyadef67nmuwdl2qrd7c4go6zp7npjfj4ruxshan69ui0rqhrea2b78g2xl9785mrudlbuqipipfc37dsriniz5pzh3mye2qafhvv10h2hf5a9yhhhm4l0ywco0x9klnze0be609a89fnzimfa30e57spct1vhrlzbwybos6gtk1oz58gr6s9hnotw3wlevr8k1463i4layny73y8qgm4eg0dw4kzuh7ykm48sxua5hdoqgq2f796ypecyr6ud6xbx2oxzdmmvdvoax9znmv2vc2ltm6yzkosts16dmhem8ixn1d4q08ldsojenpjon719g3xapqygw27m4zwixyd01wzuhgzgt1uxpebfy7m6l8j893mfqba61m3rojq6fvg60z23i == \g\v\7\r\b\8\7\y\u\6\z\f\v\9\j\v\u\0\q\z\l\p\0\x\s\6\0\2\x\h\c\6\i\3\e\u\g\4\0\p\d\e\c\b\5\0\f\b\b\z\4\a\z\w\p\i\o\a\z\r\t\r\p\1\0\f\2\f\z\z\3\a\d\6\h\q\m\f\u\d\e\5\d\2\i\8\0\8\l\a\f\g\v\0\4\d\7\z\g\1\3\5\0\b\w\u\a\3\a\l\b\m\i\j\p\2\4\n\0\o\8\c\y\a\d\e\f\6\7\n\m\u\w\d\l\2\q\r\d\7\c\4\g\o\6\z\p\7\n\p\j\f\j\4\r\u\x\s\h\a\n\6\9\u\i\0\r\q\h\r\e\a\2\b\7\8\g\2\x\l\9\7\8\5\m\r\u\d\l\b\u\q\i\p\i\p\f\c\3\7\d\s\r\i\n\i\z\5\p\z\h\3\m\y\e\2\q\a\f\h\v\v\1\0\h\2\h\f\5\a\9\y\h\h\h\m\4\l\0\y\w\c\o\0\x\9\k\l\n\z\e\0\b\e\6\0\9\a\8\9\f\n\z\i\m\f\a\3\0\e\5\7\s\p\c\t\1\v\h\r\l\z\b\w\y\b\o\s\6\g\t\k\1\o\z\5\8\g\r\6\s\9\h\n\o\t\w\3\w\l\e\v\r\8\k\1\4\6\3\i\4\l\a\y\n\y\7\3\y\8\q\g\m\4\e\g\0\d\w\4\k\z\u\h\7\y\k\m\4\8\s\x\u\a\5\h\d\o\q\g\q\2\f\7\9\6\y\p\e\c\y\r\6\u\d\6\x\b\x\2\o\x\z\d\m\m\v\d\v\o\a\x\9\z\n\m\v\2\v\c\2\l\t\m\6\y\z\k\o\s\t\s\1\6\d\m\h\e\m\8\i\x\n\1\d\4\q\0\8\l\d\s\o\j\e\n\p\j\o\n\7\1\9\g\3\x\a\p\q\y\g\w\2\7\m\4\z\w\i\x\y\d\0\1\w\z\u\h\g\z\g\t\1\u\x\p\e\b\f\y\7\m\6\l\8\j\8\9\3\m\f\q\b\a\6\1\m\3\r\o\j\q\6\f\v\g\6\0\z\2\3\i ]] 00:07:13.152 08:57:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.152 08:57:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:13.152 [2024-11-17 08:57:50.009338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:13.152 [2024-11-17 08:57:50.009481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58329 ] 00:07:13.410 [2024-11-17 08:57:50.143278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.410 [2024-11-17 08:57:50.203140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.410  [2024-11-17T08:57:50.599Z] Copying: 512/512 [B] (average 500 kBps) 00:07:13.669 00:07:13.669 08:57:50 -- dd/posix.sh@93 -- # [[ gv7rb87yu6zfv9jvu0qzlp0xs602xhc6i3eug40pdecb50fbbz4azwpioazrtrp10f2fzz3ad6hqmfude5d2i808lafgv04d7zg1350bwua3albmijp24n0o8cyadef67nmuwdl2qrd7c4go6zp7npjfj4ruxshan69ui0rqhrea2b78g2xl9785mrudlbuqipipfc37dsriniz5pzh3mye2qafhvv10h2hf5a9yhhhm4l0ywco0x9klnze0be609a89fnzimfa30e57spct1vhrlzbwybos6gtk1oz58gr6s9hnotw3wlevr8k1463i4layny73y8qgm4eg0dw4kzuh7ykm48sxua5hdoqgq2f796ypecyr6ud6xbx2oxzdmmvdvoax9znmv2vc2ltm6yzkosts16dmhem8ixn1d4q08ldsojenpjon719g3xapqygw27m4zwixyd01wzuhgzgt1uxpebfy7m6l8j893mfqba61m3rojq6fvg60z23i == \g\v\7\r\b\8\7\y\u\6\z\f\v\9\j\v\u\0\q\z\l\p\0\x\s\6\0\2\x\h\c\6\i\3\e\u\g\4\0\p\d\e\c\b\5\0\f\b\b\z\4\a\z\w\p\i\o\a\z\r\t\r\p\1\0\f\2\f\z\z\3\a\d\6\h\q\m\f\u\d\e\5\d\2\i\8\0\8\l\a\f\g\v\0\4\d\7\z\g\1\3\5\0\b\w\u\a\3\a\l\b\m\i\j\p\2\4\n\0\o\8\c\y\a\d\e\f\6\7\n\m\u\w\d\l\2\q\r\d\7\c\4\g\o\6\z\p\7\n\p\j\f\j\4\r\u\x\s\h\a\n\6\9\u\i\0\r\q\h\r\e\a\2\b\7\8\g\2\x\l\9\7\8\5\m\r\u\d\l\b\u\q\i\p\i\p\f\c\3\7\d\s\r\i\n\i\z\5\p\z\h\3\m\y\e\2\q\a\f\h\v\v\1\0\h\2\h\f\5\a\9\y\h\h\h\m\4\l\0\y\w\c\o\0\x\9\k\l\n\z\e\0\b\e\6\0\9\a\8\9\f\n\z\i\m\f\a\3\0\e\5\7\s\p\c\t\1\v\h\r\l\z\b\w\y\b\o\s\6\g\t\k\1\o\z\5\8\g\r\6\s\9\h\n\o\t\w\3\w\l\e\v\r\8\k\1\4\6\3\i\4\l\a\y\n\y\7\3\y\8\q\g\m\4\e\g\0\d\w\4\k\z\u\h\7\y\k\m\4\8\s\x\u\a\5\h\d\o\q\g\q\2\f\7\9\6\y\p\e\c\y\r\6\u\d\6\x\b\x\2\o\x\z\d\m\m\v\d\v\o\a\x\9\z\n\m\v\2\v\c\2\l\t\m\6\y\z\k\o\s\t\s\1\6\d\m\h\e\m\8\i\x\n\1\d\4\q\0\8\l\d\s\o\j\e\n\p\j\o\n\7\1\9\g\3\x\a\p\q\y\g\w\2\7\m\4\z\w\i\x\y\d\0\1\w\z\u\h\g\z\g\t\1\u\x\p\e\b\f\y\7\m\6\l\8\j\8\9\3\m\f\q\b\a\6\1\m\3\r\o\j\q\6\f\v\g\6\0\z\2\3\i ]] 00:07:13.669 08:57:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.669 08:57:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.669 [2024-11-17 08:57:50.500091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:13.669 [2024-11-17 08:57:50.500348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58336 ] 00:07:13.928 [2024-11-17 08:57:50.637214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.928 [2024-11-17 08:57:50.692684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.928  [2024-11-17T08:57:51.117Z] Copying: 512/512 [B] (average 166 kBps) 00:07:14.187 00:07:14.187 08:57:50 -- dd/posix.sh@93 -- # [[ gv7rb87yu6zfv9jvu0qzlp0xs602xhc6i3eug40pdecb50fbbz4azwpioazrtrp10f2fzz3ad6hqmfude5d2i808lafgv04d7zg1350bwua3albmijp24n0o8cyadef67nmuwdl2qrd7c4go6zp7npjfj4ruxshan69ui0rqhrea2b78g2xl9785mrudlbuqipipfc37dsriniz5pzh3mye2qafhvv10h2hf5a9yhhhm4l0ywco0x9klnze0be609a89fnzimfa30e57spct1vhrlzbwybos6gtk1oz58gr6s9hnotw3wlevr8k1463i4layny73y8qgm4eg0dw4kzuh7ykm48sxua5hdoqgq2f796ypecyr6ud6xbx2oxzdmmvdvoax9znmv2vc2ltm6yzkosts16dmhem8ixn1d4q08ldsojenpjon719g3xapqygw27m4zwixyd01wzuhgzgt1uxpebfy7m6l8j893mfqba61m3rojq6fvg60z23i == \g\v\7\r\b\8\7\y\u\6\z\f\v\9\j\v\u\0\q\z\l\p\0\x\s\6\0\2\x\h\c\6\i\3\e\u\g\4\0\p\d\e\c\b\5\0\f\b\b\z\4\a\z\w\p\i\o\a\z\r\t\r\p\1\0\f\2\f\z\z\3\a\d\6\h\q\m\f\u\d\e\5\d\2\i\8\0\8\l\a\f\g\v\0\4\d\7\z\g\1\3\5\0\b\w\u\a\3\a\l\b\m\i\j\p\2\4\n\0\o\8\c\y\a\d\e\f\6\7\n\m\u\w\d\l\2\q\r\d\7\c\4\g\o\6\z\p\7\n\p\j\f\j\4\r\u\x\s\h\a\n\6\9\u\i\0\r\q\h\r\e\a\2\b\7\8\g\2\x\l\9\7\8\5\m\r\u\d\l\b\u\q\i\p\i\p\f\c\3\7\d\s\r\i\n\i\z\5\p\z\h\3\m\y\e\2\q\a\f\h\v\v\1\0\h\2\h\f\5\a\9\y\h\h\h\m\4\l\0\y\w\c\o\0\x\9\k\l\n\z\e\0\b\e\6\0\9\a\8\9\f\n\z\i\m\f\a\3\0\e\5\7\s\p\c\t\1\v\h\r\l\z\b\w\y\b\o\s\6\g\t\k\1\o\z\5\8\g\r\6\s\9\h\n\o\t\w\3\w\l\e\v\r\8\k\1\4\6\3\i\4\l\a\y\n\y\7\3\y\8\q\g\m\4\e\g\0\d\w\4\k\z\u\h\7\y\k\m\4\8\s\x\u\a\5\h\d\o\q\g\q\2\f\7\9\6\y\p\e\c\y\r\6\u\d\6\x\b\x\2\o\x\z\d\m\m\v\d\v\o\a\x\9\z\n\m\v\2\v\c\2\l\t\m\6\y\z\k\o\s\t\s\1\6\d\m\h\e\m\8\i\x\n\1\d\4\q\0\8\l\d\s\o\j\e\n\p\j\o\n\7\1\9\g\3\x\a\p\q\y\g\w\2\7\m\4\z\w\i\x\y\d\0\1\w\z\u\h\g\z\g\t\1\u\x\p\e\b\f\y\7\m\6\l\8\j\8\9\3\m\f\q\b\a\6\1\m\3\r\o\j\q\6\f\v\g\6\0\z\2\3\i ]] 00:07:14.187 08:57:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.187 08:57:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.187 [2024-11-17 08:57:50.982071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:14.187 [2024-11-17 08:57:50.982170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58344 ] 00:07:14.446 [2024-11-17 08:57:51.120309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.446 [2024-11-17 08:57:51.186224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.446  [2024-11-17T08:57:51.666Z] Copying: 512/512 [B] (average 500 kBps) 00:07:14.736 00:07:14.736 08:57:51 -- dd/posix.sh@93 -- # [[ gv7rb87yu6zfv9jvu0qzlp0xs602xhc6i3eug40pdecb50fbbz4azwpioazrtrp10f2fzz3ad6hqmfude5d2i808lafgv04d7zg1350bwua3albmijp24n0o8cyadef67nmuwdl2qrd7c4go6zp7npjfj4ruxshan69ui0rqhrea2b78g2xl9785mrudlbuqipipfc37dsriniz5pzh3mye2qafhvv10h2hf5a9yhhhm4l0ywco0x9klnze0be609a89fnzimfa30e57spct1vhrlzbwybos6gtk1oz58gr6s9hnotw3wlevr8k1463i4layny73y8qgm4eg0dw4kzuh7ykm48sxua5hdoqgq2f796ypecyr6ud6xbx2oxzdmmvdvoax9znmv2vc2ltm6yzkosts16dmhem8ixn1d4q08ldsojenpjon719g3xapqygw27m4zwixyd01wzuhgzgt1uxpebfy7m6l8j893mfqba61m3rojq6fvg60z23i == \g\v\7\r\b\8\7\y\u\6\z\f\v\9\j\v\u\0\q\z\l\p\0\x\s\6\0\2\x\h\c\6\i\3\e\u\g\4\0\p\d\e\c\b\5\0\f\b\b\z\4\a\z\w\p\i\o\a\z\r\t\r\p\1\0\f\2\f\z\z\3\a\d\6\h\q\m\f\u\d\e\5\d\2\i\8\0\8\l\a\f\g\v\0\4\d\7\z\g\1\3\5\0\b\w\u\a\3\a\l\b\m\i\j\p\2\4\n\0\o\8\c\y\a\d\e\f\6\7\n\m\u\w\d\l\2\q\r\d\7\c\4\g\o\6\z\p\7\n\p\j\f\j\4\r\u\x\s\h\a\n\6\9\u\i\0\r\q\h\r\e\a\2\b\7\8\g\2\x\l\9\7\8\5\m\r\u\d\l\b\u\q\i\p\i\p\f\c\3\7\d\s\r\i\n\i\z\5\p\z\h\3\m\y\e\2\q\a\f\h\v\v\1\0\h\2\h\f\5\a\9\y\h\h\h\m\4\l\0\y\w\c\o\0\x\9\k\l\n\z\e\0\b\e\6\0\9\a\8\9\f\n\z\i\m\f\a\3\0\e\5\7\s\p\c\t\1\v\h\r\l\z\b\w\y\b\o\s\6\g\t\k\1\o\z\5\8\g\r\6\s\9\h\n\o\t\w\3\w\l\e\v\r\8\k\1\4\6\3\i\4\l\a\y\n\y\7\3\y\8\q\g\m\4\e\g\0\d\w\4\k\z\u\h\7\y\k\m\4\8\s\x\u\a\5\h\d\o\q\g\q\2\f\7\9\6\y\p\e\c\y\r\6\u\d\6\x\b\x\2\o\x\z\d\m\m\v\d\v\o\a\x\9\z\n\m\v\2\v\c\2\l\t\m\6\y\z\k\o\s\t\s\1\6\d\m\h\e\m\8\i\x\n\1\d\4\q\0\8\l\d\s\o\j\e\n\p\j\o\n\7\1\9\g\3\x\a\p\q\y\g\w\2\7\m\4\z\w\i\x\y\d\0\1\w\z\u\h\g\z\g\t\1\u\x\p\e\b\f\y\7\m\6\l\8\j\8\9\3\m\f\q\b\a\6\1\m\3\r\o\j\q\6\f\v\g\6\0\z\2\3\i ]] 00:07:14.736 00:07:14.736 real 0m3.791s 00:07:14.736 user 0m2.032s 00:07:14.736 sys 0m0.777s 00:07:14.736 08:57:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.736 ************************************ 00:07:14.736 END TEST dd_flags_misc 00:07:14.736 ************************************ 00:07:14.736 08:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.736 08:57:51 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:14.736 08:57:51 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:14.736 * Second test run, disabling liburing, forcing AIO 00:07:14.736 08:57:51 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:14.736 08:57:51 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:14.736 08:57:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.736 08:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.736 08:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.736 ************************************ 00:07:14.736 START TEST dd_flag_append_forced_aio 00:07:14.736 ************************************ 00:07:14.736 08:57:51 -- common/autotest_common.sh@1114 -- # append 00:07:14.736 08:57:51 -- dd/posix.sh@16 -- # local dump0 00:07:14.736 08:57:51 -- dd/posix.sh@17 -- # local dump1 00:07:14.736 08:57:51 -- dd/posix.sh@19 -- # gen_bytes 32 
00:07:14.736 08:57:51 -- dd/common.sh@98 -- # xtrace_disable 00:07:14.736 08:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.736 08:57:51 -- dd/posix.sh@19 -- # dump0=6mi24plfjv1g1nx2nwv4wo3o4djfd1rv 00:07:14.736 08:57:51 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:14.736 08:57:51 -- dd/common.sh@98 -- # xtrace_disable 00:07:14.736 08:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.736 08:57:51 -- dd/posix.sh@20 -- # dump1=wgqsyt4e5illwxvk24l9bhgdb6i15nyj 00:07:14.736 08:57:51 -- dd/posix.sh@22 -- # printf %s 6mi24plfjv1g1nx2nwv4wo3o4djfd1rv 00:07:14.736 08:57:51 -- dd/posix.sh@23 -- # printf %s wgqsyt4e5illwxvk24l9bhgdb6i15nyj 00:07:14.736 08:57:51 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:14.736 [2024-11-17 08:57:51.512004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.736 [2024-11-17 08:57:51.512103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58370 ] 00:07:14.736 [2024-11-17 08:57:51.642565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.996 [2024-11-17 08:57:51.692718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.996  [2024-11-17T08:57:51.926Z] Copying: 32/32 [B] (average 31 kBps) 00:07:14.996 00:07:14.996 08:57:51 -- dd/posix.sh@27 -- # [[ wgqsyt4e5illwxvk24l9bhgdb6i15nyj6mi24plfjv1g1nx2nwv4wo3o4djfd1rv == \w\g\q\s\y\t\4\e\5\i\l\l\w\x\v\k\2\4\l\9\b\h\g\d\b\6\i\1\5\n\y\j\6\m\i\2\4\p\l\f\j\v\1\g\1\n\x\2\n\w\v\4\w\o\3\o\4\d\j\f\d\1\r\v ]] 00:07:14.996 00:07:14.996 real 0m0.452s 00:07:14.996 user 0m0.239s 00:07:14.996 sys 0m0.092s 00:07:14.996 08:57:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.996 08:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.996 ************************************ 00:07:14.996 END TEST dd_flag_append_forced_aio 00:07:14.996 ************************************ 00:07:15.256 08:57:51 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:15.256 08:57:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:15.256 08:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.256 08:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:15.256 ************************************ 00:07:15.256 START TEST dd_flag_directory_forced_aio 00:07:15.256 ************************************ 00:07:15.256 08:57:51 -- common/autotest_common.sh@1114 -- # directory 00:07:15.256 08:57:51 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.256 08:57:51 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.256 08:57:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.256 08:57:51 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.256 08:57:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.256 08:57:51 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.256 08:57:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.256 08:57:51 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.256 08:57:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.256 08:57:51 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.256 08:57:51 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.256 08:57:51 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.256 [2024-11-17 08:57:52.027816] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.256 [2024-11-17 08:57:52.027914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58397 ] 00:07:15.256 [2024-11-17 08:57:52.163613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.515 [2024-11-17 08:57:52.216540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.516 [2024-11-17 08:57:52.261221] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:15.516 [2024-11-17 08:57:52.261275] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:15.516 [2024-11-17 08:57:52.261318] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.516 [2024-11-17 08:57:52.323152] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:15.516 08:57:52 -- common/autotest_common.sh@653 -- # es=236 00:07:15.516 08:57:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.516 08:57:52 -- common/autotest_common.sh@662 -- # es=108 00:07:15.516 08:57:52 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:15.516 08:57:52 -- common/autotest_common.sh@670 -- # es=1 00:07:15.516 08:57:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.516 08:57:52 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:15.516 08:57:52 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.516 08:57:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:15.516 08:57:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.516 08:57:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.516 08:57:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.516 08:57:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.516 08:57:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.516 08:57:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.516 08:57:52 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.516 08:57:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.516 08:57:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:15.775 [2024-11-17 08:57:52.485847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.775 [2024-11-17 08:57:52.486293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58406 ] 00:07:15.775 [2024-11-17 08:57:52.620344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.775 [2024-11-17 08:57:52.672945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.035 [2024-11-17 08:57:52.720243] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:16.035 [2024-11-17 08:57:52.720527] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:16.035 [2024-11-17 08:57:52.720561] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.035 [2024-11-17 08:57:52.775459] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:16.035 08:57:52 -- common/autotest_common.sh@653 -- # es=236 00:07:16.035 08:57:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.035 08:57:52 -- common/autotest_common.sh@662 -- # es=108 00:07:16.035 08:57:52 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:16.035 08:57:52 -- common/autotest_common.sh@670 -- # es=1 00:07:16.035 08:57:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.035 00:07:16.035 real 0m0.892s 00:07:16.035 user 0m0.490s 00:07:16.035 sys 0m0.192s 00:07:16.035 ************************************ 00:07:16.035 END TEST dd_flag_directory_forced_aio 00:07:16.035 08:57:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.035 08:57:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.035 ************************************ 00:07:16.035 08:57:52 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:16.035 08:57:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.035 08:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.035 08:57:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.035 ************************************ 00:07:16.035 START TEST dd_flag_nofollow_forced_aio 00:07:16.035 ************************************ 00:07:16.035 08:57:52 -- common/autotest_common.sh@1114 -- # nofollow 00:07:16.035 08:57:52 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:16.035 08:57:52 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:16.035 08:57:52 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:16.035 08:57:52 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:16.035 08:57:52 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.035 08:57:52 -- common/autotest_common.sh@650 -- # local es=0 00:07:16.035 08:57:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.035 08:57:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.035 08:57:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.035 08:57:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.035 08:57:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.035 08:57:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.035 08:57:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.035 08:57:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.035 08:57:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.035 08:57:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.295 [2024-11-17 08:57:52.982970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.295 [2024-11-17 08:57:52.983067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58435 ] 00:07:16.295 [2024-11-17 08:57:53.121723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.295 [2024-11-17 08:57:53.176764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.554 [2024-11-17 08:57:53.230367] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:16.554 [2024-11-17 08:57:53.230413] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:16.554 [2024-11-17 08:57:53.230442] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.554 [2024-11-17 08:57:53.288266] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:16.554 08:57:53 -- common/autotest_common.sh@653 -- # es=216 00:07:16.554 08:57:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.554 08:57:53 -- common/autotest_common.sh@662 -- # es=88 00:07:16.554 08:57:53 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:16.554 08:57:53 -- common/autotest_common.sh@670 -- # es=1 00:07:16.554 08:57:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.554 08:57:53 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:16.554 08:57:53 -- common/autotest_common.sh@650 -- # local es=0 00:07:16.554 08:57:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:16.554 08:57:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.554 08:57:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.554 08:57:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.554 08:57:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.554 08:57:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.554 08:57:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.554 08:57:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.554 08:57:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.554 08:57:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:16.554 [2024-11-17 08:57:53.415881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.554 [2024-11-17 08:57:53.415946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58444 ] 00:07:16.814 [2024-11-17 08:57:53.544556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.814 [2024-11-17 08:57:53.603713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.814 [2024-11-17 08:57:53.653666] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:16.814 [2024-11-17 08:57:53.653725] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:16.814 [2024-11-17 08:57:53.653741] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.814 [2024-11-17 08:57:53.716787] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:17.073 08:57:53 -- common/autotest_common.sh@653 -- # es=216 00:07:17.073 08:57:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.073 08:57:53 -- common/autotest_common.sh@662 -- # es=88 00:07:17.073 08:57:53 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:17.073 08:57:53 -- common/autotest_common.sh@670 -- # es=1 00:07:17.073 08:57:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.073 08:57:53 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:17.073 08:57:53 -- dd/common.sh@98 -- # xtrace_disable 00:07:17.073 08:57:53 -- common/autotest_common.sh@10 -- # set +x 00:07:17.073 08:57:53 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.073 [2024-11-17 08:57:53.887795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:17.073 [2024-11-17 08:57:53.888052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58452 ] 00:07:17.333 [2024-11-17 08:57:54.027322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.333 [2024-11-17 08:57:54.091809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.333  [2024-11-17T08:57:54.522Z] Copying: 512/512 [B] (average 500 kBps) 00:07:17.592 00:07:17.592 ************************************ 00:07:17.592 END TEST dd_flag_nofollow_forced_aio 00:07:17.592 ************************************ 00:07:17.592 08:57:54 -- dd/posix.sh@49 -- # [[ 7etnsylzam11o2szbskyk46t3o2twtyfkdc58gsqwjmastbyi6b46a10t2mxv2htmyhebtqc4kavspf6u6rawsaofwtahpqbbggfsjdwbrbyxl884m3v6jgqcyd0g507142eayctky5d2t41on91licazymyv28t891fgoe0cqrlwfdatr380wxm2v63pc19hzcha2vzb96rk637okle5sxwbg7c1du3oppwf44huxdch75vzu7g0d8dmn8s3bqrbzlcw4eaaqxd9asc3s1ehf7135p6m7n49kkna36bp3j10h32f2dnxrqyas2upmqb7mne17ln7pqrzbryc807k2vwkzqwgtdbqroe1j7fa0ktu0s5ot8zoxnps702rnd6mm784o958jfmkw7izguvrs3o8wtde6q3b7e057oiza3gqumcfd3rmt3gajyk60ohthsg2it3hy6feja87crpm2udywu4zbgr57cdtf0lmujix2hy1i26lccz2mjgchkp == \7\e\t\n\s\y\l\z\a\m\1\1\o\2\s\z\b\s\k\y\k\4\6\t\3\o\2\t\w\t\y\f\k\d\c\5\8\g\s\q\w\j\m\a\s\t\b\y\i\6\b\4\6\a\1\0\t\2\m\x\v\2\h\t\m\y\h\e\b\t\q\c\4\k\a\v\s\p\f\6\u\6\r\a\w\s\a\o\f\w\t\a\h\p\q\b\b\g\g\f\s\j\d\w\b\r\b\y\x\l\8\8\4\m\3\v\6\j\g\q\c\y\d\0\g\5\0\7\1\4\2\e\a\y\c\t\k\y\5\d\2\t\4\1\o\n\9\1\l\i\c\a\z\y\m\y\v\2\8\t\8\9\1\f\g\o\e\0\c\q\r\l\w\f\d\a\t\r\3\8\0\w\x\m\2\v\6\3\p\c\1\9\h\z\c\h\a\2\v\z\b\9\6\r\k\6\3\7\o\k\l\e\5\s\x\w\b\g\7\c\1\d\u\3\o\p\p\w\f\4\4\h\u\x\d\c\h\7\5\v\z\u\7\g\0\d\8\d\m\n\8\s\3\b\q\r\b\z\l\c\w\4\e\a\a\q\x\d\9\a\s\c\3\s\1\e\h\f\7\1\3\5\p\6\m\7\n\4\9\k\k\n\a\3\6\b\p\3\j\1\0\h\3\2\f\2\d\n\x\r\q\y\a\s\2\u\p\m\q\b\7\m\n\e\1\7\l\n\7\p\q\r\z\b\r\y\c\8\0\7\k\2\v\w\k\z\q\w\g\t\d\b\q\r\o\e\1\j\7\f\a\0\k\t\u\0\s\5\o\t\8\z\o\x\n\p\s\7\0\2\r\n\d\6\m\m\7\8\4\o\9\5\8\j\f\m\k\w\7\i\z\g\u\v\r\s\3\o\8\w\t\d\e\6\q\3\b\7\e\0\5\7\o\i\z\a\3\g\q\u\m\c\f\d\3\r\m\t\3\g\a\j\y\k\6\0\o\h\t\h\s\g\2\i\t\3\h\y\6\f\e\j\a\8\7\c\r\p\m\2\u\d\y\w\u\4\z\b\g\r\5\7\c\d\t\f\0\l\m\u\j\i\x\2\h\y\1\i\2\6\l\c\c\z\2\m\j\g\c\h\k\p ]] 00:07:17.592 00:07:17.592 real 0m1.414s 00:07:17.592 user 0m0.759s 00:07:17.592 sys 0m0.320s 00:07:17.592 08:57:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.592 08:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.592 08:57:54 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:17.592 08:57:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.592 08:57:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.592 08:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.592 ************************************ 00:07:17.592 START TEST dd_flag_noatime_forced_aio 00:07:17.592 ************************************ 00:07:17.592 08:57:54 -- common/autotest_common.sh@1114 -- # noatime 00:07:17.592 08:57:54 -- dd/posix.sh@53 -- # local atime_if 00:07:17.592 08:57:54 -- dd/posix.sh@54 -- # local atime_of 00:07:17.592 08:57:54 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:17.592 08:57:54 -- dd/common.sh@98 -- # xtrace_disable 00:07:17.592 08:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.592 08:57:54 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.592 08:57:54 -- dd/posix.sh@60 -- 
# atime_if=1731833874 00:07:17.592 08:57:54 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.592 08:57:54 -- dd/posix.sh@61 -- # atime_of=1731833874 00:07:17.592 08:57:54 -- dd/posix.sh@66 -- # sleep 1 00:07:18.530 08:57:55 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.790 [2024-11-17 08:57:55.461769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.790 [2024-11-17 08:57:55.461874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58492 ] 00:07:18.790 [2024-11-17 08:57:55.598095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.790 [2024-11-17 08:57:55.668480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.049  [2024-11-17T08:57:55.979Z] Copying: 512/512 [B] (average 500 kBps) 00:07:19.049 00:07:19.049 08:57:55 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.049 08:57:55 -- dd/posix.sh@69 -- # (( atime_if == 1731833874 )) 00:07:19.049 08:57:55 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.049 08:57:55 -- dd/posix.sh@70 -- # (( atime_of == 1731833874 )) 00:07:19.049 08:57:55 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.049 [2024-11-17 08:57:55.968991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
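In the noatime test traced above, dd/posix.sh records the access time of dd.dump0 with stat --printf=%X (the atime_if=1731833874 assignment), does the same for dd.dump1, sleeps one second, and copies with --iflag=noatime; the (( atime_if == ... )) checks that follow verify the atime did not move, and the later (( atime_if < ... )) check after a plain copy verifies that it did. The same assertion written directly against coreutils dd, assuming a filesystem that updates atime on read (a minimal sketch, not the SPDK helper):

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  dd if=dd.dump0 iflag=noatime of=dd.dump1 2> /dev/null
  (( atime_before == $(stat --printf=%X dd.dump0) ))   # noatime read: access time must not change
  dd if=dd.dump0 of=dd.dump1 2> /dev/null
  (( atime_before < $(stat --printf=%X dd.dump0) ))    # ordinary read: access time advances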
00:07:19.049 [2024-11-17 08:57:55.969237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58504 ] 00:07:19.309 [2024-11-17 08:57:56.106857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.309 [2024-11-17 08:57:56.174162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.309  [2024-11-17T08:57:56.498Z] Copying: 512/512 [B] (average 500 kBps) 00:07:19.568 00:07:19.568 08:57:56 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.568 ************************************ 00:07:19.568 END TEST dd_flag_noatime_forced_aio 00:07:19.568 ************************************ 00:07:19.568 08:57:56 -- dd/posix.sh@73 -- # (( atime_if < 1731833876 )) 00:07:19.568 00:07:19.568 real 0m2.046s 00:07:19.568 user 0m0.568s 00:07:19.568 sys 0m0.238s 00:07:19.568 08:57:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.568 08:57:56 -- common/autotest_common.sh@10 -- # set +x 00:07:19.568 08:57:56 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:19.568 08:57:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:19.568 08:57:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.568 08:57:56 -- common/autotest_common.sh@10 -- # set +x 00:07:19.568 ************************************ 00:07:19.568 START TEST dd_flags_misc_forced_aio 00:07:19.568 ************************************ 00:07:19.568 08:57:56 -- common/autotest_common.sh@1114 -- # io 00:07:19.568 08:57:56 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:19.568 08:57:56 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:19.568 08:57:56 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:19.568 08:57:56 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:19.568 08:57:56 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:19.568 08:57:56 -- dd/common.sh@98 -- # xtrace_disable 00:07:19.568 08:57:56 -- common/autotest_common.sh@10 -- # set +x 00:07:19.568 08:57:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:19.568 08:57:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:19.828 [2024-11-17 08:57:56.542252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
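dd_flags_misc_forced_aio, which starts here, sweeps a small flag matrix: read-side flags direct and nonblock, write-side flags the same two plus sync and dsync. For each read-side flag it regenerates a 512-byte random dd.dump0 (gen_bytes 512) and copies it once per write-side flag with --iflag/--oflag; the long [[ ... == \... ]] blocks that follow are the verbatim byte-for-byte comparison of the copied data. The loop reduced to its shape (a sketch: spdk_dd stands for the full build/bin/spdk_dd path shown above, and cmp stands in for the inline comparison):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    head -c 512 /dev/urandom > dd.dump0          # fresh payload for each read-side flag
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --aio --if=dd.dump0 --iflag=$flag_ro --of=dd.dump1 --oflag=$flag_rw
      cmp dd.dump0 dd.dump1                      # content must survive every flag combination
    done
  done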
00:07:19.828 [2024-11-17 08:57:56.542513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58530 ] 00:07:19.828 [2024-11-17 08:57:56.682234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.828 [2024-11-17 08:57:56.733604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.087  [2024-11-17T08:57:57.017Z] Copying: 512/512 [B] (average 500 kBps) 00:07:20.087 00:07:20.087 08:57:56 -- dd/posix.sh@93 -- # [[ ddurbtq11eobe0k953ce6xssny86hqyfw8ds0diihkbvo5lf61yv3o7bsvbqaibayz6lrtmhgbf2d8s3m3gvonb95u0l8q70ood8luqd240ow5vyqd9jmnr1tqe6bemyqsd21te63ofwdijbre0wpvyh7duai7akzpob9vmihsvz8h1abeobfg39rgy1t0jdslvrhbr9kf9pn7cr9j2lch9i59br0cc5b61ed8lcdnr1dtko3773o4msnx9y0oq5b4my7b5ennw3loztqfq3ngqode4649oojjcgji0uktic83alj05sk0xryz6oxzb9i3o2dmv192bgmconlldcbhbrgpf069vcegkzzvxywwu6e4x4sxnxj96wei9e7p0u8jdz0ai0oqb4r84xzvmdvh973378coix6j8asuh9gufr5zsgt7bzdr083dk14wvrpoxu77j9kokoq3wm0x5l12rrjg5bxjqm4blia35vnuk0eg5ro18op9s1tvbzb8zr == \d\d\u\r\b\t\q\1\1\e\o\b\e\0\k\9\5\3\c\e\6\x\s\s\n\y\8\6\h\q\y\f\w\8\d\s\0\d\i\i\h\k\b\v\o\5\l\f\6\1\y\v\3\o\7\b\s\v\b\q\a\i\b\a\y\z\6\l\r\t\m\h\g\b\f\2\d\8\s\3\m\3\g\v\o\n\b\9\5\u\0\l\8\q\7\0\o\o\d\8\l\u\q\d\2\4\0\o\w\5\v\y\q\d\9\j\m\n\r\1\t\q\e\6\b\e\m\y\q\s\d\2\1\t\e\6\3\o\f\w\d\i\j\b\r\e\0\w\p\v\y\h\7\d\u\a\i\7\a\k\z\p\o\b\9\v\m\i\h\s\v\z\8\h\1\a\b\e\o\b\f\g\3\9\r\g\y\1\t\0\j\d\s\l\v\r\h\b\r\9\k\f\9\p\n\7\c\r\9\j\2\l\c\h\9\i\5\9\b\r\0\c\c\5\b\6\1\e\d\8\l\c\d\n\r\1\d\t\k\o\3\7\7\3\o\4\m\s\n\x\9\y\0\o\q\5\b\4\m\y\7\b\5\e\n\n\w\3\l\o\z\t\q\f\q\3\n\g\q\o\d\e\4\6\4\9\o\o\j\j\c\g\j\i\0\u\k\t\i\c\8\3\a\l\j\0\5\s\k\0\x\r\y\z\6\o\x\z\b\9\i\3\o\2\d\m\v\1\9\2\b\g\m\c\o\n\l\l\d\c\b\h\b\r\g\p\f\0\6\9\v\c\e\g\k\z\z\v\x\y\w\w\u\6\e\4\x\4\s\x\n\x\j\9\6\w\e\i\9\e\7\p\0\u\8\j\d\z\0\a\i\0\o\q\b\4\r\8\4\x\z\v\m\d\v\h\9\7\3\3\7\8\c\o\i\x\6\j\8\a\s\u\h\9\g\u\f\r\5\z\s\g\t\7\b\z\d\r\0\8\3\d\k\1\4\w\v\r\p\o\x\u\7\7\j\9\k\o\k\o\q\3\w\m\0\x\5\l\1\2\r\r\j\g\5\b\x\j\q\m\4\b\l\i\a\3\5\v\n\u\k\0\e\g\5\r\o\1\8\o\p\9\s\1\t\v\b\z\b\8\z\r ]] 00:07:20.087 08:57:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.087 08:57:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:20.087 [2024-11-17 08:57:56.992365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:20.087 [2024-11-17 08:57:56.992461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58538 ] 00:07:20.346 [2024-11-17 08:57:57.128928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.346 [2024-11-17 08:57:57.186805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.346  [2024-11-17T08:57:57.535Z] Copying: 512/512 [B] (average 500 kBps) 00:07:20.605 00:07:20.605 08:57:57 -- dd/posix.sh@93 -- # [[ ddurbtq11eobe0k953ce6xssny86hqyfw8ds0diihkbvo5lf61yv3o7bsvbqaibayz6lrtmhgbf2d8s3m3gvonb95u0l8q70ood8luqd240ow5vyqd9jmnr1tqe6bemyqsd21te63ofwdijbre0wpvyh7duai7akzpob9vmihsvz8h1abeobfg39rgy1t0jdslvrhbr9kf9pn7cr9j2lch9i59br0cc5b61ed8lcdnr1dtko3773o4msnx9y0oq5b4my7b5ennw3loztqfq3ngqode4649oojjcgji0uktic83alj05sk0xryz6oxzb9i3o2dmv192bgmconlldcbhbrgpf069vcegkzzvxywwu6e4x4sxnxj96wei9e7p0u8jdz0ai0oqb4r84xzvmdvh973378coix6j8asuh9gufr5zsgt7bzdr083dk14wvrpoxu77j9kokoq3wm0x5l12rrjg5bxjqm4blia35vnuk0eg5ro18op9s1tvbzb8zr == \d\d\u\r\b\t\q\1\1\e\o\b\e\0\k\9\5\3\c\e\6\x\s\s\n\y\8\6\h\q\y\f\w\8\d\s\0\d\i\i\h\k\b\v\o\5\l\f\6\1\y\v\3\o\7\b\s\v\b\q\a\i\b\a\y\z\6\l\r\t\m\h\g\b\f\2\d\8\s\3\m\3\g\v\o\n\b\9\5\u\0\l\8\q\7\0\o\o\d\8\l\u\q\d\2\4\0\o\w\5\v\y\q\d\9\j\m\n\r\1\t\q\e\6\b\e\m\y\q\s\d\2\1\t\e\6\3\o\f\w\d\i\j\b\r\e\0\w\p\v\y\h\7\d\u\a\i\7\a\k\z\p\o\b\9\v\m\i\h\s\v\z\8\h\1\a\b\e\o\b\f\g\3\9\r\g\y\1\t\0\j\d\s\l\v\r\h\b\r\9\k\f\9\p\n\7\c\r\9\j\2\l\c\h\9\i\5\9\b\r\0\c\c\5\b\6\1\e\d\8\l\c\d\n\r\1\d\t\k\o\3\7\7\3\o\4\m\s\n\x\9\y\0\o\q\5\b\4\m\y\7\b\5\e\n\n\w\3\l\o\z\t\q\f\q\3\n\g\q\o\d\e\4\6\4\9\o\o\j\j\c\g\j\i\0\u\k\t\i\c\8\3\a\l\j\0\5\s\k\0\x\r\y\z\6\o\x\z\b\9\i\3\o\2\d\m\v\1\9\2\b\g\m\c\o\n\l\l\d\c\b\h\b\r\g\p\f\0\6\9\v\c\e\g\k\z\z\v\x\y\w\w\u\6\e\4\x\4\s\x\n\x\j\9\6\w\e\i\9\e\7\p\0\u\8\j\d\z\0\a\i\0\o\q\b\4\r\8\4\x\z\v\m\d\v\h\9\7\3\3\7\8\c\o\i\x\6\j\8\a\s\u\h\9\g\u\f\r\5\z\s\g\t\7\b\z\d\r\0\8\3\d\k\1\4\w\v\r\p\o\x\u\7\7\j\9\k\o\k\o\q\3\w\m\0\x\5\l\1\2\r\r\j\g\5\b\x\j\q\m\4\b\l\i\a\3\5\v\n\u\k\0\e\g\5\r\o\1\8\o\p\9\s\1\t\v\b\z\b\8\z\r ]] 00:07:20.605 08:57:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.605 08:57:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:20.605 [2024-11-17 08:57:57.460575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:20.605 [2024-11-17 08:57:57.460680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58540 ] 00:07:20.864 [2024-11-17 08:57:57.598517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.864 [2024-11-17 08:57:57.656885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.864  [2024-11-17T08:57:58.053Z] Copying: 512/512 [B] (average 250 kBps) 00:07:21.123 00:07:21.123 08:57:57 -- dd/posix.sh@93 -- # [[ ddurbtq11eobe0k953ce6xssny86hqyfw8ds0diihkbvo5lf61yv3o7bsvbqaibayz6lrtmhgbf2d8s3m3gvonb95u0l8q70ood8luqd240ow5vyqd9jmnr1tqe6bemyqsd21te63ofwdijbre0wpvyh7duai7akzpob9vmihsvz8h1abeobfg39rgy1t0jdslvrhbr9kf9pn7cr9j2lch9i59br0cc5b61ed8lcdnr1dtko3773o4msnx9y0oq5b4my7b5ennw3loztqfq3ngqode4649oojjcgji0uktic83alj05sk0xryz6oxzb9i3o2dmv192bgmconlldcbhbrgpf069vcegkzzvxywwu6e4x4sxnxj96wei9e7p0u8jdz0ai0oqb4r84xzvmdvh973378coix6j8asuh9gufr5zsgt7bzdr083dk14wvrpoxu77j9kokoq3wm0x5l12rrjg5bxjqm4blia35vnuk0eg5ro18op9s1tvbzb8zr == \d\d\u\r\b\t\q\1\1\e\o\b\e\0\k\9\5\3\c\e\6\x\s\s\n\y\8\6\h\q\y\f\w\8\d\s\0\d\i\i\h\k\b\v\o\5\l\f\6\1\y\v\3\o\7\b\s\v\b\q\a\i\b\a\y\z\6\l\r\t\m\h\g\b\f\2\d\8\s\3\m\3\g\v\o\n\b\9\5\u\0\l\8\q\7\0\o\o\d\8\l\u\q\d\2\4\0\o\w\5\v\y\q\d\9\j\m\n\r\1\t\q\e\6\b\e\m\y\q\s\d\2\1\t\e\6\3\o\f\w\d\i\j\b\r\e\0\w\p\v\y\h\7\d\u\a\i\7\a\k\z\p\o\b\9\v\m\i\h\s\v\z\8\h\1\a\b\e\o\b\f\g\3\9\r\g\y\1\t\0\j\d\s\l\v\r\h\b\r\9\k\f\9\p\n\7\c\r\9\j\2\l\c\h\9\i\5\9\b\r\0\c\c\5\b\6\1\e\d\8\l\c\d\n\r\1\d\t\k\o\3\7\7\3\o\4\m\s\n\x\9\y\0\o\q\5\b\4\m\y\7\b\5\e\n\n\w\3\l\o\z\t\q\f\q\3\n\g\q\o\d\e\4\6\4\9\o\o\j\j\c\g\j\i\0\u\k\t\i\c\8\3\a\l\j\0\5\s\k\0\x\r\y\z\6\o\x\z\b\9\i\3\o\2\d\m\v\1\9\2\b\g\m\c\o\n\l\l\d\c\b\h\b\r\g\p\f\0\6\9\v\c\e\g\k\z\z\v\x\y\w\w\u\6\e\4\x\4\s\x\n\x\j\9\6\w\e\i\9\e\7\p\0\u\8\j\d\z\0\a\i\0\o\q\b\4\r\8\4\x\z\v\m\d\v\h\9\7\3\3\7\8\c\o\i\x\6\j\8\a\s\u\h\9\g\u\f\r\5\z\s\g\t\7\b\z\d\r\0\8\3\d\k\1\4\w\v\r\p\o\x\u\7\7\j\9\k\o\k\o\q\3\w\m\0\x\5\l\1\2\r\r\j\g\5\b\x\j\q\m\4\b\l\i\a\3\5\v\n\u\k\0\e\g\5\r\o\1\8\o\p\9\s\1\t\v\b\z\b\8\z\r ]] 00:07:21.123 08:57:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.123 08:57:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:21.123 [2024-11-17 08:57:57.928947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:21.123 [2024-11-17 08:57:57.929044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58553 ] 00:07:21.394 [2024-11-17 08:57:58.065739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.394 [2024-11-17 08:57:58.113235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.394  [2024-11-17T08:57:58.609Z] Copying: 512/512 [B] (average 250 kBps) 00:07:21.679 00:07:21.679 08:57:58 -- dd/posix.sh@93 -- # [[ ddurbtq11eobe0k953ce6xssny86hqyfw8ds0diihkbvo5lf61yv3o7bsvbqaibayz6lrtmhgbf2d8s3m3gvonb95u0l8q70ood8luqd240ow5vyqd9jmnr1tqe6bemyqsd21te63ofwdijbre0wpvyh7duai7akzpob9vmihsvz8h1abeobfg39rgy1t0jdslvrhbr9kf9pn7cr9j2lch9i59br0cc5b61ed8lcdnr1dtko3773o4msnx9y0oq5b4my7b5ennw3loztqfq3ngqode4649oojjcgji0uktic83alj05sk0xryz6oxzb9i3o2dmv192bgmconlldcbhbrgpf069vcegkzzvxywwu6e4x4sxnxj96wei9e7p0u8jdz0ai0oqb4r84xzvmdvh973378coix6j8asuh9gufr5zsgt7bzdr083dk14wvrpoxu77j9kokoq3wm0x5l12rrjg5bxjqm4blia35vnuk0eg5ro18op9s1tvbzb8zr == \d\d\u\r\b\t\q\1\1\e\o\b\e\0\k\9\5\3\c\e\6\x\s\s\n\y\8\6\h\q\y\f\w\8\d\s\0\d\i\i\h\k\b\v\o\5\l\f\6\1\y\v\3\o\7\b\s\v\b\q\a\i\b\a\y\z\6\l\r\t\m\h\g\b\f\2\d\8\s\3\m\3\g\v\o\n\b\9\5\u\0\l\8\q\7\0\o\o\d\8\l\u\q\d\2\4\0\o\w\5\v\y\q\d\9\j\m\n\r\1\t\q\e\6\b\e\m\y\q\s\d\2\1\t\e\6\3\o\f\w\d\i\j\b\r\e\0\w\p\v\y\h\7\d\u\a\i\7\a\k\z\p\o\b\9\v\m\i\h\s\v\z\8\h\1\a\b\e\o\b\f\g\3\9\r\g\y\1\t\0\j\d\s\l\v\r\h\b\r\9\k\f\9\p\n\7\c\r\9\j\2\l\c\h\9\i\5\9\b\r\0\c\c\5\b\6\1\e\d\8\l\c\d\n\r\1\d\t\k\o\3\7\7\3\o\4\m\s\n\x\9\y\0\o\q\5\b\4\m\y\7\b\5\e\n\n\w\3\l\o\z\t\q\f\q\3\n\g\q\o\d\e\4\6\4\9\o\o\j\j\c\g\j\i\0\u\k\t\i\c\8\3\a\l\j\0\5\s\k\0\x\r\y\z\6\o\x\z\b\9\i\3\o\2\d\m\v\1\9\2\b\g\m\c\o\n\l\l\d\c\b\h\b\r\g\p\f\0\6\9\v\c\e\g\k\z\z\v\x\y\w\w\u\6\e\4\x\4\s\x\n\x\j\9\6\w\e\i\9\e\7\p\0\u\8\j\d\z\0\a\i\0\o\q\b\4\r\8\4\x\z\v\m\d\v\h\9\7\3\3\7\8\c\o\i\x\6\j\8\a\s\u\h\9\g\u\f\r\5\z\s\g\t\7\b\z\d\r\0\8\3\d\k\1\4\w\v\r\p\o\x\u\7\7\j\9\k\o\k\o\q\3\w\m\0\x\5\l\1\2\r\r\j\g\5\b\x\j\q\m\4\b\l\i\a\3\5\v\n\u\k\0\e\g\5\r\o\1\8\o\p\9\s\1\t\v\b\z\b\8\z\r ]] 00:07:21.679 08:57:58 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:21.679 08:57:58 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:21.679 08:57:58 -- dd/common.sh@98 -- # xtrace_disable 00:07:21.679 08:57:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.679 08:57:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.679 08:57:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:21.679 [2024-11-17 08:57:58.387374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:21.679 [2024-11-17 08:57:58.387464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58555 ] 00:07:21.679 [2024-11-17 08:57:58.523597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.679 [2024-11-17 08:57:58.580000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.946  [2024-11-17T08:57:58.876Z] Copying: 512/512 [B] (average 500 kBps) 00:07:21.946 00:07:21.946 08:57:58 -- dd/posix.sh@93 -- # [[ pqdqmlfir0zj8qavfmajdjzcu66g2e3cokm0ydn66y1cggbxub4tbipt3nfvtr7rgcdk78gk2g4d9bmiwg3hu01i86ppz4xb4ujoe4z4wrg8nkjqnl1tkqszs4nfl51fu0uj51y7hxclw7n18vj8un56l8hrhl7d6q091rabsbmzd2x442itjqy3hwese56ky119xkyfv92vwp88gsxlm69ifbtxbc4ugfgs2yc5nkh655ti4omp9igxuzkk5o5xqtryk2rn1qfyy20953k8qwo2cjfhbf3rx3wdflrchwd6yaiw7vl3nsurno84qehtttkttz63ysi85b9tn8dgf1ycs77kgna0mhqpwuomf78gi7il380yatj42xd1nun5k65m4fesxvxjl1iwbiz0h2vnatzmslxxdlq00nluo67cux0ids0zkgbfuyuas3wih6yxkon5akni5q8bmhid4gnjdijs1mepnk9jc9bzjg3ty2kew4nuw0unzwy4yakk == \p\q\d\q\m\l\f\i\r\0\z\j\8\q\a\v\f\m\a\j\d\j\z\c\u\6\6\g\2\e\3\c\o\k\m\0\y\d\n\6\6\y\1\c\g\g\b\x\u\b\4\t\b\i\p\t\3\n\f\v\t\r\7\r\g\c\d\k\7\8\g\k\2\g\4\d\9\b\m\i\w\g\3\h\u\0\1\i\8\6\p\p\z\4\x\b\4\u\j\o\e\4\z\4\w\r\g\8\n\k\j\q\n\l\1\t\k\q\s\z\s\4\n\f\l\5\1\f\u\0\u\j\5\1\y\7\h\x\c\l\w\7\n\1\8\v\j\8\u\n\5\6\l\8\h\r\h\l\7\d\6\q\0\9\1\r\a\b\s\b\m\z\d\2\x\4\4\2\i\t\j\q\y\3\h\w\e\s\e\5\6\k\y\1\1\9\x\k\y\f\v\9\2\v\w\p\8\8\g\s\x\l\m\6\9\i\f\b\t\x\b\c\4\u\g\f\g\s\2\y\c\5\n\k\h\6\5\5\t\i\4\o\m\p\9\i\g\x\u\z\k\k\5\o\5\x\q\t\r\y\k\2\r\n\1\q\f\y\y\2\0\9\5\3\k\8\q\w\o\2\c\j\f\h\b\f\3\r\x\3\w\d\f\l\r\c\h\w\d\6\y\a\i\w\7\v\l\3\n\s\u\r\n\o\8\4\q\e\h\t\t\t\k\t\t\z\6\3\y\s\i\8\5\b\9\t\n\8\d\g\f\1\y\c\s\7\7\k\g\n\a\0\m\h\q\p\w\u\o\m\f\7\8\g\i\7\i\l\3\8\0\y\a\t\j\4\2\x\d\1\n\u\n\5\k\6\5\m\4\f\e\s\x\v\x\j\l\1\i\w\b\i\z\0\h\2\v\n\a\t\z\m\s\l\x\x\d\l\q\0\0\n\l\u\o\6\7\c\u\x\0\i\d\s\0\z\k\g\b\f\u\y\u\a\s\3\w\i\h\6\y\x\k\o\n\5\a\k\n\i\5\q\8\b\m\h\i\d\4\g\n\j\d\i\j\s\1\m\e\p\n\k\9\j\c\9\b\z\j\g\3\t\y\2\k\e\w\4\n\u\w\0\u\n\z\w\y\4\y\a\k\k ]] 00:07:21.946 08:57:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.946 08:57:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:21.946 [2024-11-17 08:57:58.843913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:21.946 [2024-11-17 08:57:58.844146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58568 ] 00:07:22.205 [2024-11-17 08:57:58.971238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.205 [2024-11-17 08:57:59.018946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.205  [2024-11-17T08:57:59.400Z] Copying: 512/512 [B] (average 500 kBps) 00:07:22.470 00:07:22.471 08:57:59 -- dd/posix.sh@93 -- # [[ pqdqmlfir0zj8qavfmajdjzcu66g2e3cokm0ydn66y1cggbxub4tbipt3nfvtr7rgcdk78gk2g4d9bmiwg3hu01i86ppz4xb4ujoe4z4wrg8nkjqnl1tkqszs4nfl51fu0uj51y7hxclw7n18vj8un56l8hrhl7d6q091rabsbmzd2x442itjqy3hwese56ky119xkyfv92vwp88gsxlm69ifbtxbc4ugfgs2yc5nkh655ti4omp9igxuzkk5o5xqtryk2rn1qfyy20953k8qwo2cjfhbf3rx3wdflrchwd6yaiw7vl3nsurno84qehtttkttz63ysi85b9tn8dgf1ycs77kgna0mhqpwuomf78gi7il380yatj42xd1nun5k65m4fesxvxjl1iwbiz0h2vnatzmslxxdlq00nluo67cux0ids0zkgbfuyuas3wih6yxkon5akni5q8bmhid4gnjdijs1mepnk9jc9bzjg3ty2kew4nuw0unzwy4yakk == \p\q\d\q\m\l\f\i\r\0\z\j\8\q\a\v\f\m\a\j\d\j\z\c\u\6\6\g\2\e\3\c\o\k\m\0\y\d\n\6\6\y\1\c\g\g\b\x\u\b\4\t\b\i\p\t\3\n\f\v\t\r\7\r\g\c\d\k\7\8\g\k\2\g\4\d\9\b\m\i\w\g\3\h\u\0\1\i\8\6\p\p\z\4\x\b\4\u\j\o\e\4\z\4\w\r\g\8\n\k\j\q\n\l\1\t\k\q\s\z\s\4\n\f\l\5\1\f\u\0\u\j\5\1\y\7\h\x\c\l\w\7\n\1\8\v\j\8\u\n\5\6\l\8\h\r\h\l\7\d\6\q\0\9\1\r\a\b\s\b\m\z\d\2\x\4\4\2\i\t\j\q\y\3\h\w\e\s\e\5\6\k\y\1\1\9\x\k\y\f\v\9\2\v\w\p\8\8\g\s\x\l\m\6\9\i\f\b\t\x\b\c\4\u\g\f\g\s\2\y\c\5\n\k\h\6\5\5\t\i\4\o\m\p\9\i\g\x\u\z\k\k\5\o\5\x\q\t\r\y\k\2\r\n\1\q\f\y\y\2\0\9\5\3\k\8\q\w\o\2\c\j\f\h\b\f\3\r\x\3\w\d\f\l\r\c\h\w\d\6\y\a\i\w\7\v\l\3\n\s\u\r\n\o\8\4\q\e\h\t\t\t\k\t\t\z\6\3\y\s\i\8\5\b\9\t\n\8\d\g\f\1\y\c\s\7\7\k\g\n\a\0\m\h\q\p\w\u\o\m\f\7\8\g\i\7\i\l\3\8\0\y\a\t\j\4\2\x\d\1\n\u\n\5\k\6\5\m\4\f\e\s\x\v\x\j\l\1\i\w\b\i\z\0\h\2\v\n\a\t\z\m\s\l\x\x\d\l\q\0\0\n\l\u\o\6\7\c\u\x\0\i\d\s\0\z\k\g\b\f\u\y\u\a\s\3\w\i\h\6\y\x\k\o\n\5\a\k\n\i\5\q\8\b\m\h\i\d\4\g\n\j\d\i\j\s\1\m\e\p\n\k\9\j\c\9\b\z\j\g\3\t\y\2\k\e\w\4\n\u\w\0\u\n\z\w\y\4\y\a\k\k ]] 00:07:22.471 08:57:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:22.471 08:57:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:22.471 [2024-11-17 08:57:59.280391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:22.471 [2024-11-17 08:57:59.280492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58570 ] 00:07:22.732 [2024-11-17 08:57:59.418347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.732 [2024-11-17 08:57:59.466678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.732  [2024-11-17T08:57:59.922Z] Copying: 512/512 [B] (average 500 kBps) 00:07:22.992 00:07:22.992 08:57:59 -- dd/posix.sh@93 -- # [[ pqdqmlfir0zj8qavfmajdjzcu66g2e3cokm0ydn66y1cggbxub4tbipt3nfvtr7rgcdk78gk2g4d9bmiwg3hu01i86ppz4xb4ujoe4z4wrg8nkjqnl1tkqszs4nfl51fu0uj51y7hxclw7n18vj8un56l8hrhl7d6q091rabsbmzd2x442itjqy3hwese56ky119xkyfv92vwp88gsxlm69ifbtxbc4ugfgs2yc5nkh655ti4omp9igxuzkk5o5xqtryk2rn1qfyy20953k8qwo2cjfhbf3rx3wdflrchwd6yaiw7vl3nsurno84qehtttkttz63ysi85b9tn8dgf1ycs77kgna0mhqpwuomf78gi7il380yatj42xd1nun5k65m4fesxvxjl1iwbiz0h2vnatzmslxxdlq00nluo67cux0ids0zkgbfuyuas3wih6yxkon5akni5q8bmhid4gnjdijs1mepnk9jc9bzjg3ty2kew4nuw0unzwy4yakk == \p\q\d\q\m\l\f\i\r\0\z\j\8\q\a\v\f\m\a\j\d\j\z\c\u\6\6\g\2\e\3\c\o\k\m\0\y\d\n\6\6\y\1\c\g\g\b\x\u\b\4\t\b\i\p\t\3\n\f\v\t\r\7\r\g\c\d\k\7\8\g\k\2\g\4\d\9\b\m\i\w\g\3\h\u\0\1\i\8\6\p\p\z\4\x\b\4\u\j\o\e\4\z\4\w\r\g\8\n\k\j\q\n\l\1\t\k\q\s\z\s\4\n\f\l\5\1\f\u\0\u\j\5\1\y\7\h\x\c\l\w\7\n\1\8\v\j\8\u\n\5\6\l\8\h\r\h\l\7\d\6\q\0\9\1\r\a\b\s\b\m\z\d\2\x\4\4\2\i\t\j\q\y\3\h\w\e\s\e\5\6\k\y\1\1\9\x\k\y\f\v\9\2\v\w\p\8\8\g\s\x\l\m\6\9\i\f\b\t\x\b\c\4\u\g\f\g\s\2\y\c\5\n\k\h\6\5\5\t\i\4\o\m\p\9\i\g\x\u\z\k\k\5\o\5\x\q\t\r\y\k\2\r\n\1\q\f\y\y\2\0\9\5\3\k\8\q\w\o\2\c\j\f\h\b\f\3\r\x\3\w\d\f\l\r\c\h\w\d\6\y\a\i\w\7\v\l\3\n\s\u\r\n\o\8\4\q\e\h\t\t\t\k\t\t\z\6\3\y\s\i\8\5\b\9\t\n\8\d\g\f\1\y\c\s\7\7\k\g\n\a\0\m\h\q\p\w\u\o\m\f\7\8\g\i\7\i\l\3\8\0\y\a\t\j\4\2\x\d\1\n\u\n\5\k\6\5\m\4\f\e\s\x\v\x\j\l\1\i\w\b\i\z\0\h\2\v\n\a\t\z\m\s\l\x\x\d\l\q\0\0\n\l\u\o\6\7\c\u\x\0\i\d\s\0\z\k\g\b\f\u\y\u\a\s\3\w\i\h\6\y\x\k\o\n\5\a\k\n\i\5\q\8\b\m\h\i\d\4\g\n\j\d\i\j\s\1\m\e\p\n\k\9\j\c\9\b\z\j\g\3\t\y\2\k\e\w\4\n\u\w\0\u\n\z\w\y\4\y\a\k\k ]] 00:07:22.992 08:57:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:22.992 08:57:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:22.992 [2024-11-17 08:57:59.723991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:22.992 [2024-11-17 08:57:59.724077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58583 ] 00:07:22.992 [2024-11-17 08:57:59.853291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.992 [2024-11-17 08:57:59.904721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.251  [2024-11-17T08:58:00.181Z] Copying: 512/512 [B] (average 500 kBps) 00:07:23.251 00:07:23.251 08:58:00 -- dd/posix.sh@93 -- # [[ pqdqmlfir0zj8qavfmajdjzcu66g2e3cokm0ydn66y1cggbxub4tbipt3nfvtr7rgcdk78gk2g4d9bmiwg3hu01i86ppz4xb4ujoe4z4wrg8nkjqnl1tkqszs4nfl51fu0uj51y7hxclw7n18vj8un56l8hrhl7d6q091rabsbmzd2x442itjqy3hwese56ky119xkyfv92vwp88gsxlm69ifbtxbc4ugfgs2yc5nkh655ti4omp9igxuzkk5o5xqtryk2rn1qfyy20953k8qwo2cjfhbf3rx3wdflrchwd6yaiw7vl3nsurno84qehtttkttz63ysi85b9tn8dgf1ycs77kgna0mhqpwuomf78gi7il380yatj42xd1nun5k65m4fesxvxjl1iwbiz0h2vnatzmslxxdlq00nluo67cux0ids0zkgbfuyuas3wih6yxkon5akni5q8bmhid4gnjdijs1mepnk9jc9bzjg3ty2kew4nuw0unzwy4yakk == \p\q\d\q\m\l\f\i\r\0\z\j\8\q\a\v\f\m\a\j\d\j\z\c\u\6\6\g\2\e\3\c\o\k\m\0\y\d\n\6\6\y\1\c\g\g\b\x\u\b\4\t\b\i\p\t\3\n\f\v\t\r\7\r\g\c\d\k\7\8\g\k\2\g\4\d\9\b\m\i\w\g\3\h\u\0\1\i\8\6\p\p\z\4\x\b\4\u\j\o\e\4\z\4\w\r\g\8\n\k\j\q\n\l\1\t\k\q\s\z\s\4\n\f\l\5\1\f\u\0\u\j\5\1\y\7\h\x\c\l\w\7\n\1\8\v\j\8\u\n\5\6\l\8\h\r\h\l\7\d\6\q\0\9\1\r\a\b\s\b\m\z\d\2\x\4\4\2\i\t\j\q\y\3\h\w\e\s\e\5\6\k\y\1\1\9\x\k\y\f\v\9\2\v\w\p\8\8\g\s\x\l\m\6\9\i\f\b\t\x\b\c\4\u\g\f\g\s\2\y\c\5\n\k\h\6\5\5\t\i\4\o\m\p\9\i\g\x\u\z\k\k\5\o\5\x\q\t\r\y\k\2\r\n\1\q\f\y\y\2\0\9\5\3\k\8\q\w\o\2\c\j\f\h\b\f\3\r\x\3\w\d\f\l\r\c\h\w\d\6\y\a\i\w\7\v\l\3\n\s\u\r\n\o\8\4\q\e\h\t\t\t\k\t\t\z\6\3\y\s\i\8\5\b\9\t\n\8\d\g\f\1\y\c\s\7\7\k\g\n\a\0\m\h\q\p\w\u\o\m\f\7\8\g\i\7\i\l\3\8\0\y\a\t\j\4\2\x\d\1\n\u\n\5\k\6\5\m\4\f\e\s\x\v\x\j\l\1\i\w\b\i\z\0\h\2\v\n\a\t\z\m\s\l\x\x\d\l\q\0\0\n\l\u\o\6\7\c\u\x\0\i\d\s\0\z\k\g\b\f\u\y\u\a\s\3\w\i\h\6\y\x\k\o\n\5\a\k\n\i\5\q\8\b\m\h\i\d\4\g\n\j\d\i\j\s\1\m\e\p\n\k\9\j\c\9\b\z\j\g\3\t\y\2\k\e\w\4\n\u\w\0\u\n\z\w\y\4\y\a\k\k ]] 00:07:23.251 00:07:23.251 real 0m3.639s 00:07:23.251 user 0m1.981s 00:07:23.251 sys 0m0.691s 00:07:23.251 08:58:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.252 ************************************ 00:07:23.252 END TEST dd_flags_misc_forced_aio 00:07:23.252 ************************************ 00:07:23.252 08:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.252 08:58:00 -- dd/posix.sh@1 -- # cleanup 00:07:23.252 08:58:00 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:23.252 08:58:00 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:23.252 ************************************ 00:07:23.252 END TEST spdk_dd_posix 00:07:23.252 ************************************ 00:07:23.252 00:07:23.252 real 0m17.797s 00:07:23.252 user 0m8.473s 00:07:23.252 sys 0m3.495s 00:07:23.252 08:58:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.252 08:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.511 08:58:00 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:23.511 08:58:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.511 08:58:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:07:23.511 08:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.511 ************************************ 00:07:23.511 START TEST spdk_dd_malloc 00:07:23.511 ************************************ 00:07:23.511 08:58:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:23.511 * Looking for test storage... 00:07:23.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:23.511 08:58:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:23.511 08:58:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:23.511 08:58:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:23.511 08:58:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:23.511 08:58:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:23.511 08:58:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:23.511 08:58:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:23.511 08:58:00 -- scripts/common.sh@335 -- # IFS=.-: 00:07:23.511 08:58:00 -- scripts/common.sh@335 -- # read -ra ver1 00:07:23.511 08:58:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.511 08:58:00 -- scripts/common.sh@336 -- # read -ra ver2 00:07:23.511 08:58:00 -- scripts/common.sh@337 -- # local 'op=<' 00:07:23.511 08:58:00 -- scripts/common.sh@339 -- # ver1_l=2 00:07:23.511 08:58:00 -- scripts/common.sh@340 -- # ver2_l=1 00:07:23.511 08:58:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:23.511 08:58:00 -- scripts/common.sh@343 -- # case "$op" in 00:07:23.511 08:58:00 -- scripts/common.sh@344 -- # : 1 00:07:23.511 08:58:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:23.511 08:58:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.511 08:58:00 -- scripts/common.sh@364 -- # decimal 1 00:07:23.511 08:58:00 -- scripts/common.sh@352 -- # local d=1 00:07:23.511 08:58:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.511 08:58:00 -- scripts/common.sh@354 -- # echo 1 00:07:23.511 08:58:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:23.511 08:58:00 -- scripts/common.sh@365 -- # decimal 2 00:07:23.511 08:58:00 -- scripts/common.sh@352 -- # local d=2 00:07:23.511 08:58:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.511 08:58:00 -- scripts/common.sh@354 -- # echo 2 00:07:23.511 08:58:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:23.511 08:58:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:23.511 08:58:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:23.511 08:58:00 -- scripts/common.sh@367 -- # return 0 00:07:23.511 08:58:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.511 08:58:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:23.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.511 --rc genhtml_branch_coverage=1 00:07:23.511 --rc genhtml_function_coverage=1 00:07:23.511 --rc genhtml_legend=1 00:07:23.511 --rc geninfo_all_blocks=1 00:07:23.511 --rc geninfo_unexecuted_blocks=1 00:07:23.511 00:07:23.511 ' 00:07:23.511 08:58:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:23.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.512 --rc genhtml_branch_coverage=1 00:07:23.512 --rc genhtml_function_coverage=1 00:07:23.512 --rc genhtml_legend=1 00:07:23.512 --rc geninfo_all_blocks=1 00:07:23.512 --rc geninfo_unexecuted_blocks=1 00:07:23.512 00:07:23.512 ' 00:07:23.512 08:58:00 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:07:23.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.512 --rc genhtml_branch_coverage=1 00:07:23.512 --rc genhtml_function_coverage=1 00:07:23.512 --rc genhtml_legend=1 00:07:23.512 --rc geninfo_all_blocks=1 00:07:23.512 --rc geninfo_unexecuted_blocks=1 00:07:23.512 00:07:23.512 ' 00:07:23.512 08:58:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:23.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.512 --rc genhtml_branch_coverage=1 00:07:23.512 --rc genhtml_function_coverage=1 00:07:23.512 --rc genhtml_legend=1 00:07:23.512 --rc geninfo_all_blocks=1 00:07:23.512 --rc geninfo_unexecuted_blocks=1 00:07:23.512 00:07:23.512 ' 00:07:23.512 08:58:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.512 08:58:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.512 08:58:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.512 08:58:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.512 08:58:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.512 08:58:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.512 08:58:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.512 08:58:00 -- paths/export.sh@5 -- # export PATH 00:07:23.512 08:58:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.512 08:58:00 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:23.512 08:58:00 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.512 08:58:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.512 08:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.512 ************************************ 00:07:23.512 START TEST dd_malloc_copy 00:07:23.512 ************************************ 00:07:23.512 08:58:00 -- common/autotest_common.sh@1114 -- # malloc_copy 00:07:23.512 08:58:00 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:23.512 08:58:00 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:23.512 08:58:00 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:23.512 08:58:00 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:23.512 08:58:00 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:23.512 08:58:00 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:23.512 08:58:00 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:23.512 08:58:00 -- dd/malloc.sh@28 -- # gen_conf 00:07:23.512 08:58:00 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.512 08:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:23.771 [2024-11-17 08:58:00.472453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.771 [2024-11-17 08:58:00.472750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58653 ] 00:07:23.771 { 00:07:23.771 "subsystems": [ 00:07:23.771 { 00:07:23.771 "subsystem": "bdev", 00:07:23.771 "config": [ 00:07:23.771 { 00:07:23.771 "params": { 00:07:23.771 "block_size": 512, 00:07:23.771 "num_blocks": 1048576, 00:07:23.772 "name": "malloc0" 00:07:23.772 }, 00:07:23.772 "method": "bdev_malloc_create" 00:07:23.772 }, 00:07:23.772 { 00:07:23.772 "params": { 00:07:23.772 "block_size": 512, 00:07:23.772 "num_blocks": 1048576, 00:07:23.772 "name": "malloc1" 00:07:23.772 }, 00:07:23.772 "method": "bdev_malloc_create" 00:07:23.772 }, 00:07:23.772 { 00:07:23.772 "method": "bdev_wait_for_examine" 00:07:23.772 } 00:07:23.772 ] 00:07:23.772 } 00:07:23.772 ] 00:07:23.772 } 00:07:23.772 [2024-11-17 08:58:00.610872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.772 [2024-11-17 08:58:00.662578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.150  [2024-11-17T08:58:03.015Z] Copying: 246/512 [MB] (246 MBps) [2024-11-17T08:58:03.015Z] Copying: 490/512 [MB] (244 MBps) [2024-11-17T08:58:03.584Z] Copying: 512/512 [MB] (average 244 MBps) 00:07:26.654 00:07:26.654 08:58:03 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:26.654 08:58:03 -- dd/malloc.sh@33 -- # gen_conf 00:07:26.654 08:58:03 -- dd/common.sh@31 -- # xtrace_disable 00:07:26.654 08:58:03 -- common/autotest_common.sh@10 -- # set +x 00:07:26.654 [2024-11-17 08:58:03.349146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
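The dd_malloc_copy run above touches no files at all: the JSON handed in on fd 62 creates two RAM-backed malloc bdevs of 1,048,576 blocks x 512 bytes (512 MiB each), and spdk_dd copies bdev to bdev with --ib=malloc0 --ob=malloc1, then back the other way, at roughly 240-246 MBps here. The same invocation with the config inlined through process substitution (a sketch; the test builds this JSON with its gen_conf helper):

  conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
    { "method": "bdev_malloc_create", "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
    { "method": "bdev_wait_for_examine" } ] } ] }'
  spdk_dd --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$conf")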
00:07:26.654 [2024-11-17 08:58:03.349228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58695 ] 00:07:26.654 { 00:07:26.654 "subsystems": [ 00:07:26.654 { 00:07:26.654 "subsystem": "bdev", 00:07:26.654 "config": [ 00:07:26.654 { 00:07:26.654 "params": { 00:07:26.654 "block_size": 512, 00:07:26.654 "num_blocks": 1048576, 00:07:26.654 "name": "malloc0" 00:07:26.654 }, 00:07:26.654 "method": "bdev_malloc_create" 00:07:26.654 }, 00:07:26.654 { 00:07:26.654 "params": { 00:07:26.654 "block_size": 512, 00:07:26.654 "num_blocks": 1048576, 00:07:26.654 "name": "malloc1" 00:07:26.654 }, 00:07:26.654 "method": "bdev_malloc_create" 00:07:26.654 }, 00:07:26.654 { 00:07:26.654 "method": "bdev_wait_for_examine" 00:07:26.654 } 00:07:26.654 ] 00:07:26.654 } 00:07:26.654 ] 00:07:26.654 } 00:07:26.654 [2024-11-17 08:58:03.476179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.654 [2024-11-17 08:58:03.528473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.032  [2024-11-17T08:58:05.900Z] Copying: 240/512 [MB] (240 MBps) [2024-11-17T08:58:05.900Z] Copying: 480/512 [MB] (240 MBps) [2024-11-17T08:58:06.468Z] Copying: 512/512 [MB] (average 240 MBps) 00:07:29.538 00:07:29.538 ************************************ 00:07:29.538 END TEST dd_malloc_copy 00:07:29.538 ************************************ 00:07:29.538 00:07:29.538 real 0m5.808s 00:07:29.538 user 0m5.193s 00:07:29.538 sys 0m0.458s 00:07:29.538 08:58:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.538 08:58:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.538 ************************************ 00:07:29.538 END TEST spdk_dd_malloc 00:07:29.538 ************************************ 00:07:29.538 00:07:29.538 real 0m6.046s 00:07:29.538 user 0m5.328s 00:07:29.538 sys 0m0.564s 00:07:29.538 08:58:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.538 08:58:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.538 08:58:06 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:29.538 08:58:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:29.538 08:58:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.538 08:58:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.538 ************************************ 00:07:29.538 START TEST spdk_dd_bdev_to_bdev 00:07:29.538 ************************************ 00:07:29.538 08:58:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:29.538 * Looking for test storage... 
00:07:29.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:29.538 08:58:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:29.538 08:58:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:29.538 08:58:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:29.798 08:58:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:29.798 08:58:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:29.798 08:58:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:29.798 08:58:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:29.798 08:58:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:29.798 08:58:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:29.798 08:58:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.798 08:58:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:29.798 08:58:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:29.798 08:58:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:29.798 08:58:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:29.798 08:58:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:29.798 08:58:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:29.798 08:58:06 -- scripts/common.sh@344 -- # : 1 00:07:29.798 08:58:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:29.798 08:58:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.798 08:58:06 -- scripts/common.sh@364 -- # decimal 1 00:07:29.798 08:58:06 -- scripts/common.sh@352 -- # local d=1 00:07:29.798 08:58:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.798 08:58:06 -- scripts/common.sh@354 -- # echo 1 00:07:29.798 08:58:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:29.798 08:58:06 -- scripts/common.sh@365 -- # decimal 2 00:07:29.798 08:58:06 -- scripts/common.sh@352 -- # local d=2 00:07:29.798 08:58:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.798 08:58:06 -- scripts/common.sh@354 -- # echo 2 00:07:29.798 08:58:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:29.798 08:58:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:29.798 08:58:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:29.798 08:58:06 -- scripts/common.sh@367 -- # return 0 00:07:29.798 08:58:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.798 08:58:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:29.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.798 --rc genhtml_branch_coverage=1 00:07:29.798 --rc genhtml_function_coverage=1 00:07:29.798 --rc genhtml_legend=1 00:07:29.798 --rc geninfo_all_blocks=1 00:07:29.798 --rc geninfo_unexecuted_blocks=1 00:07:29.798 00:07:29.798 ' 00:07:29.798 08:58:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:29.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.798 --rc genhtml_branch_coverage=1 00:07:29.798 --rc genhtml_function_coverage=1 00:07:29.798 --rc genhtml_legend=1 00:07:29.798 --rc geninfo_all_blocks=1 00:07:29.798 --rc geninfo_unexecuted_blocks=1 00:07:29.798 00:07:29.798 ' 00:07:29.798 08:58:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:29.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.798 --rc genhtml_branch_coverage=1 00:07:29.798 --rc genhtml_function_coverage=1 00:07:29.798 --rc genhtml_legend=1 00:07:29.798 --rc geninfo_all_blocks=1 00:07:29.798 --rc geninfo_unexecuted_blocks=1 00:07:29.798 00:07:29.798 ' 00:07:29.798 08:58:06 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:29.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.798 --rc genhtml_branch_coverage=1 00:07:29.798 --rc genhtml_function_coverage=1 00:07:29.798 --rc genhtml_legend=1 00:07:29.798 --rc geninfo_all_blocks=1 00:07:29.798 --rc geninfo_unexecuted_blocks=1 00:07:29.798 00:07:29.798 ' 00:07:29.798 08:58:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.798 08:58:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.798 08:58:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.798 08:58:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.798 08:58:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.798 08:58:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.798 08:58:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.798 08:58:06 -- paths/export.sh@5 -- # export PATH 00:07:29.798 08:58:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.798 08:58:06 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:29.798 08:58:06 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:29.798 08:58:06 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:29.798 08:58:06 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:29.798 08:58:06 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:29.799 08:58:06 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:29.799 08:58:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:29.799 08:58:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.799 08:58:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.799 ************************************ 00:07:29.799 START TEST dd_inflate_file 00:07:29.799 ************************************ 00:07:29.799 08:58:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:29.799 [2024-11-17 08:58:06.568687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
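Before the bdev copy tests run, dd_inflate_file grows the scratch file to a useful size: the magic line "This Is Our Magic, find it" is echoed into dd.dump0 first, and 64 one-MiB blocks of zeroes are then appended with --if=/dev/zero --oflag=append --bs=1048576 --count=64; the test_file0_size=67108891 recorded just below is exactly 64 * 1048576 plus the 27 bytes of the magic line and its newline. Equivalent coreutils form (illustrative):

  echo 'This Is Our Magic, find it' > dd.dump0                              # 27 bytes including the newline
  dd if=/dev/zero of=dd.dump0 bs=1048576 count=64 oflag=append conv=notrunc
  wc -c < dd.dump0                                                          # 67108891 = 64 * 1048576 + 27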
00:07:29.799 [2024-11-17 08:58:06.568937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58801 ] 00:07:29.799 [2024-11-17 08:58:06.705168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.058 [2024-11-17 08:58:06.752978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.058  [2024-11-17T08:58:07.247Z] Copying: 64/64 [MB] (average 2206 MBps) 00:07:30.317 00:07:30.317 00:07:30.317 real 0m0.480s 00:07:30.317 ************************************ 00:07:30.317 END TEST dd_inflate_file 00:07:30.317 ************************************ 00:07:30.317 user 0m0.248s 00:07:30.317 sys 0m0.116s 00:07:30.317 08:58:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.317 08:58:06 -- common/autotest_common.sh@10 -- # set +x 00:07:30.317 08:58:07 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:30.317 08:58:07 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:30.317 08:58:07 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:30.317 08:58:07 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:30.317 08:58:07 -- dd/common.sh@31 -- # xtrace_disable 00:07:30.317 08:58:07 -- common/autotest_common.sh@10 -- # set +x 00:07:30.317 08:58:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:30.317 08:58:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.317 08:58:07 -- common/autotest_common.sh@10 -- # set +x 00:07:30.317 ************************************ 00:07:30.317 START TEST dd_copy_to_out_bdev 00:07:30.317 ************************************ 00:07:30.317 08:58:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:30.317 { 00:07:30.317 "subsystems": [ 00:07:30.317 { 00:07:30.317 "subsystem": "bdev", 00:07:30.317 "config": [ 00:07:30.317 { 00:07:30.317 "params": { 00:07:30.317 "trtype": "pcie", 00:07:30.317 "traddr": "0000:00:06.0", 00:07:30.317 "name": "Nvme0" 00:07:30.317 }, 00:07:30.317 "method": "bdev_nvme_attach_controller" 00:07:30.317 }, 00:07:30.317 { 00:07:30.317 "params": { 00:07:30.317 "trtype": "pcie", 00:07:30.317 "traddr": "0000:00:07.0", 00:07:30.317 "name": "Nvme1" 00:07:30.317 }, 00:07:30.317 "method": "bdev_nvme_attach_controller" 00:07:30.317 }, 00:07:30.317 { 00:07:30.317 "method": "bdev_wait_for_examine" 00:07:30.317 } 00:07:30.317 ] 00:07:30.317 } 00:07:30.317 ] 00:07:30.317 } 00:07:30.317 [2024-11-17 08:58:07.099100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
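dd_copy_to_out_bdev then pushes that 64 MiB file into an NVMe namespace: the JSON config attaches the two controllers by PCI address (Nvme0 at 0000:00:06.0, Nvme1 at 0000:00:07.0) and the copy targets the first namespace with --ob=Nvme0n1, completing at about 50 MBps in the run below. The call shape with the config inlined (a sketch; paths shortened):

  nvme_conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:06.0" } },
    { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:07.0" } },
    { "method": "bdev_wait_for_examine" } ] } ] }'
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json <(printf '%s' "$nvme_conf")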
00:07:30.317 [2024-11-17 08:58:07.099344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58840 ] 00:07:30.317 [2024-11-17 08:58:07.237337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.577 [2024-11-17 08:58:07.285525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.955  [2024-11-17T08:58:08.885Z] Copying: 50/64 [MB] (50 MBps) [2024-11-17T08:58:09.143Z] Copying: 64/64 [MB] (average 50 MBps) 00:07:32.213 00:07:32.213 00:07:32.213 real 0m1.888s 00:07:32.213 user 0m1.647s 00:07:32.213 sys 0m0.166s 00:07:32.213 ************************************ 00:07:32.213 END TEST dd_copy_to_out_bdev 00:07:32.213 ************************************ 00:07:32.214 08:58:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.214 08:58:08 -- common/autotest_common.sh@10 -- # set +x 00:07:32.214 08:58:08 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:32.214 08:58:08 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:32.214 08:58:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:32.214 08:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.214 08:58:08 -- common/autotest_common.sh@10 -- # set +x 00:07:32.214 ************************************ 00:07:32.214 START TEST dd_offset_magic 00:07:32.214 ************************************ 00:07:32.214 08:58:08 -- common/autotest_common.sh@1114 -- # offset_magic 00:07:32.214 08:58:08 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:32.214 08:58:08 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:32.214 08:58:08 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:32.214 08:58:08 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:32.214 08:58:08 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:32.214 08:58:08 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:32.214 08:58:08 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.214 08:58:08 -- common/autotest_common.sh@10 -- # set +x 00:07:32.214 [2024-11-17 08:58:09.050419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
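dd_offset_magic, which starts here, verifies that --seek and --skip are honored when both ends are bdevs: for each offset in its list (16, then 64, counted in 1 MiB blocks), it copies 65 one-MiB blocks from Nvme0n1 into Nvme1n1 at --seek=<offset>, reads a single block back from Nvme1n1 at --skip=<offset> into dd.dump1, and the read -rn26 magic_check / [[ ... == This Is Our Magic, find it ]] pair confirms the magic line landed at the expected position. One iteration, reduced (a sketch; $nvme_conf is the controller-attach JSON from the previous sketch):

  offset=16                                             # the test repeats this with offset=64
  spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --bs=1048576 --count=65 --seek=$offset --json <(printf '%s' "$nvme_conf")
  spdk_dd --ib=Nvme1n1 --of=dd.dump1 --bs=1048576 --count=1 --skip=$offset --json <(printf '%s' "$nvme_conf")
  read -rn26 magic_check < dd.dump1
  [[ $magic_check == 'This Is Our Magic, find it' ]]    # the 26-character magic must be found at the offset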
00:07:32.214 [2024-11-17 08:58:09.050511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58877 ] 00:07:32.214 { 00:07:32.214 "subsystems": [ 00:07:32.214 { 00:07:32.214 "subsystem": "bdev", 00:07:32.214 "config": [ 00:07:32.214 { 00:07:32.214 "params": { 00:07:32.214 "trtype": "pcie", 00:07:32.214 "traddr": "0000:00:06.0", 00:07:32.214 "name": "Nvme0" 00:07:32.214 }, 00:07:32.214 "method": "bdev_nvme_attach_controller" 00:07:32.214 }, 00:07:32.214 { 00:07:32.214 "params": { 00:07:32.214 "trtype": "pcie", 00:07:32.214 "traddr": "0000:00:07.0", 00:07:32.214 "name": "Nvme1" 00:07:32.214 }, 00:07:32.214 "method": "bdev_nvme_attach_controller" 00:07:32.214 }, 00:07:32.214 { 00:07:32.214 "method": "bdev_wait_for_examine" 00:07:32.214 } 00:07:32.214 ] 00:07:32.214 } 00:07:32.214 ] 00:07:32.214 } 00:07:32.471 [2024-11-17 08:58:09.187023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.471 [2024-11-17 08:58:09.234913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.729  [2024-11-17T08:58:09.918Z] Copying: 65/65 [MB] (average 955 MBps) 00:07:32.988 00:07:32.988 08:58:09 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:32.988 08:58:09 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:32.988 08:58:09 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.988 08:58:09 -- common/autotest_common.sh@10 -- # set +x 00:07:32.988 [2024-11-17 08:58:09.726280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:32.988 [2024-11-17 08:58:09.726372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:07:32.988 { 00:07:32.988 "subsystems": [ 00:07:32.988 { 00:07:32.988 "subsystem": "bdev", 00:07:32.988 "config": [ 00:07:32.988 { 00:07:32.988 "params": { 00:07:32.988 "trtype": "pcie", 00:07:32.988 "traddr": "0000:00:06.0", 00:07:32.988 "name": "Nvme0" 00:07:32.988 }, 00:07:32.988 "method": "bdev_nvme_attach_controller" 00:07:32.988 }, 00:07:32.988 { 00:07:32.988 "params": { 00:07:32.988 "trtype": "pcie", 00:07:32.988 "traddr": "0000:00:07.0", 00:07:32.988 "name": "Nvme1" 00:07:32.988 }, 00:07:32.988 "method": "bdev_nvme_attach_controller" 00:07:32.988 }, 00:07:32.988 { 00:07:32.988 "method": "bdev_wait_for_examine" 00:07:32.988 } 00:07:32.988 ] 00:07:32.988 } 00:07:32.988 ] 00:07:32.988 } 00:07:32.988 [2024-11-17 08:58:09.862168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.248 [2024-11-17 08:58:09.918484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.248  [2024-11-17T08:58:10.437Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:33.507 00:07:33.507 08:58:10 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:33.507 08:58:10 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:33.507 08:58:10 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:33.507 08:58:10 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:33.507 08:58:10 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:33.507 08:58:10 -- dd/common.sh@31 -- # xtrace_disable 00:07:33.507 08:58:10 -- common/autotest_common.sh@10 -- # set +x 00:07:33.507 [2024-11-17 08:58:10.322773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:33.507 [2024-11-17 08:58:10.322864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ] 00:07:33.507 { 00:07:33.507 "subsystems": [ 00:07:33.507 { 00:07:33.507 "subsystem": "bdev", 00:07:33.507 "config": [ 00:07:33.507 { 00:07:33.507 "params": { 00:07:33.507 "trtype": "pcie", 00:07:33.507 "traddr": "0000:00:06.0", 00:07:33.507 "name": "Nvme0" 00:07:33.507 }, 00:07:33.507 "method": "bdev_nvme_attach_controller" 00:07:33.507 }, 00:07:33.507 { 00:07:33.507 "params": { 00:07:33.507 "trtype": "pcie", 00:07:33.507 "traddr": "0000:00:07.0", 00:07:33.507 "name": "Nvme1" 00:07:33.507 }, 00:07:33.507 "method": "bdev_nvme_attach_controller" 00:07:33.507 }, 00:07:33.507 { 00:07:33.507 "method": "bdev_wait_for_examine" 00:07:33.507 } 00:07:33.507 ] 00:07:33.507 } 00:07:33.507 ] 00:07:33.507 } 00:07:33.766 [2024-11-17 08:58:10.461830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.766 [2024-11-17 08:58:10.508424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.025  [2024-11-17T08:58:10.955Z] Copying: 65/65 [MB] (average 1031 MBps) 00:07:34.025 00:07:34.025 08:58:10 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:34.025 08:58:10 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:34.025 08:58:10 -- dd/common.sh@31 -- # xtrace_disable 00:07:34.025 08:58:10 -- common/autotest_common.sh@10 -- # set +x 00:07:34.285 [2024-11-17 08:58:10.998192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:34.285 [2024-11-17 08:58:10.998285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58926 ] 00:07:34.285 { 00:07:34.285 "subsystems": [ 00:07:34.285 { 00:07:34.285 "subsystem": "bdev", 00:07:34.285 "config": [ 00:07:34.285 { 00:07:34.285 "params": { 00:07:34.285 "trtype": "pcie", 00:07:34.285 "traddr": "0000:00:06.0", 00:07:34.285 "name": "Nvme0" 00:07:34.285 }, 00:07:34.285 "method": "bdev_nvme_attach_controller" 00:07:34.285 }, 00:07:34.285 { 00:07:34.285 "params": { 00:07:34.285 "trtype": "pcie", 00:07:34.285 "traddr": "0000:00:07.0", 00:07:34.285 "name": "Nvme1" 00:07:34.285 }, 00:07:34.285 "method": "bdev_nvme_attach_controller" 00:07:34.285 }, 00:07:34.285 { 00:07:34.285 "method": "bdev_wait_for_examine" 00:07:34.285 } 00:07:34.285 ] 00:07:34.285 } 00:07:34.285 ] 00:07:34.285 } 00:07:34.285 [2024-11-17 08:58:11.134189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.285 [2024-11-17 08:58:11.184866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.544  [2024-11-17T08:58:11.733Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:34.803 00:07:34.803 ************************************ 00:07:34.803 END TEST dd_offset_magic 00:07:34.803 ************************************ 00:07:34.803 08:58:11 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:34.803 08:58:11 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:34.803 00:07:34.803 real 0m2.541s 00:07:34.803 user 0m1.937s 00:07:34.803 sys 0m0.427s 00:07:34.803 08:58:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.803 08:58:11 -- common/autotest_common.sh@10 -- # set +x 00:07:34.803 08:58:11 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:34.803 08:58:11 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:34.803 08:58:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:34.803 08:58:11 -- dd/common.sh@11 -- # local nvme_ref= 00:07:34.803 08:58:11 -- dd/common.sh@12 -- # local size=4194330 00:07:34.803 08:58:11 -- dd/common.sh@14 -- # local bs=1048576 00:07:34.803 08:58:11 -- dd/common.sh@15 -- # local count=5 00:07:34.803 08:58:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:34.803 08:58:11 -- dd/common.sh@18 -- # gen_conf 00:07:34.803 08:58:11 -- dd/common.sh@31 -- # xtrace_disable 00:07:34.803 08:58:11 -- common/autotest_common.sh@10 -- # set +x 00:07:34.803 [2024-11-17 08:58:11.630121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:34.803 [2024-11-17 08:58:11.630213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:07:34.803 { 00:07:34.803 "subsystems": [ 00:07:34.803 { 00:07:34.803 "subsystem": "bdev", 00:07:34.803 "config": [ 00:07:34.803 { 00:07:34.803 "params": { 00:07:34.803 "trtype": "pcie", 00:07:34.803 "traddr": "0000:00:06.0", 00:07:34.803 "name": "Nvme0" 00:07:34.803 }, 00:07:34.803 "method": "bdev_nvme_attach_controller" 00:07:34.803 }, 00:07:34.803 { 00:07:34.803 "params": { 00:07:34.803 "trtype": "pcie", 00:07:34.803 "traddr": "0000:00:07.0", 00:07:34.803 "name": "Nvme1" 00:07:34.803 }, 00:07:34.803 "method": "bdev_nvme_attach_controller" 00:07:34.803 }, 00:07:34.803 { 00:07:34.803 "method": "bdev_wait_for_examine" 00:07:34.803 } 00:07:34.803 ] 00:07:34.803 } 00:07:34.803 ] 00:07:34.803 } 00:07:35.063 [2024-11-17 08:58:11.766636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.063 [2024-11-17 08:58:11.813383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.063  [2024-11-17T08:58:12.252Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:07:35.322 00:07:35.322 08:58:12 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:35.322 08:58:12 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:35.322 08:58:12 -- dd/common.sh@11 -- # local nvme_ref= 00:07:35.322 08:58:12 -- dd/common.sh@12 -- # local size=4194330 00:07:35.322 08:58:12 -- dd/common.sh@14 -- # local bs=1048576 00:07:35.322 08:58:12 -- dd/common.sh@15 -- # local count=5 00:07:35.322 08:58:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:35.322 08:58:12 -- dd/common.sh@18 -- # gen_conf 00:07:35.322 08:58:12 -- dd/common.sh@31 -- # xtrace_disable 00:07:35.322 08:58:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.322 [2024-11-17 08:58:12.203739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:35.322 [2024-11-17 08:58:12.204200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58970 ] 00:07:35.322 { 00:07:35.322 "subsystems": [ 00:07:35.322 { 00:07:35.322 "subsystem": "bdev", 00:07:35.322 "config": [ 00:07:35.322 { 00:07:35.322 "params": { 00:07:35.322 "trtype": "pcie", 00:07:35.322 "traddr": "0000:00:06.0", 00:07:35.322 "name": "Nvme0" 00:07:35.322 }, 00:07:35.322 "method": "bdev_nvme_attach_controller" 00:07:35.322 }, 00:07:35.322 { 00:07:35.322 "params": { 00:07:35.322 "trtype": "pcie", 00:07:35.322 "traddr": "0000:00:07.0", 00:07:35.322 "name": "Nvme1" 00:07:35.322 }, 00:07:35.322 "method": "bdev_nvme_attach_controller" 00:07:35.322 }, 00:07:35.322 { 00:07:35.322 "method": "bdev_wait_for_examine" 00:07:35.322 } 00:07:35.322 ] 00:07:35.322 } 00:07:35.322 ] 00:07:35.322 } 00:07:35.581 [2024-11-17 08:58:12.342880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.581 [2024-11-17 08:58:12.392187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.840  [2024-11-17T08:58:12.770Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:35.840 00:07:35.840 08:58:12 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:35.840 00:07:35.840 real 0m6.448s 00:07:35.840 user 0m4.854s 00:07:35.840 sys 0m1.107s 00:07:35.840 ************************************ 00:07:35.840 END TEST spdk_dd_bdev_to_bdev 00:07:35.840 ************************************ 00:07:35.840 08:58:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.840 08:58:12 -- common/autotest_common.sh@10 -- # set +x 00:07:36.100 08:58:12 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:36.100 08:58:12 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:36.100 08:58:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.100 08:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.100 08:58:12 -- common/autotest_common.sh@10 -- # set +x 00:07:36.100 ************************************ 00:07:36.100 START TEST spdk_dd_uring 00:07:36.100 ************************************ 00:07:36.100 08:58:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:36.100 * Looking for test storage... 
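The bdev-to-bdev runs traced above all drive spdk_dd with a JSON bdev configuration handed over an anonymous file descriptor (--json /dev/fd/62). A minimal standalone sketch of the same invocations follows; the flags, PCIe addresses, and config contents are taken from the trace, while the temp-file plumbing, the /tmp paths, and spdk_dd being on PATH (the trace uses the full build/bin path) are assumptions.

cat > /tmp/dd_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:07.0", "name": "Nvme1" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Copy 65 MiB from Nvme0n1 into Nvme1n1 at a 16 MiB offset, then read one
# 1 MiB block back out, mirroring the dd_offset_magic invocations above.
spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /tmp/dd_bdev.json
spdk_dd --ib=Nvme1n1 --of=/tmp/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /tmp/dd_bdev.json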
00:07:36.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.100 08:58:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:36.100 08:58:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:36.100 08:58:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:36.100 08:58:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:36.100 08:58:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:36.100 08:58:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:36.100 08:58:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:36.100 08:58:12 -- scripts/common.sh@335 -- # IFS=.-: 00:07:36.100 08:58:12 -- scripts/common.sh@335 -- # read -ra ver1 00:07:36.100 08:58:12 -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.100 08:58:12 -- scripts/common.sh@336 -- # read -ra ver2 00:07:36.100 08:58:12 -- scripts/common.sh@337 -- # local 'op=<' 00:07:36.100 08:58:12 -- scripts/common.sh@339 -- # ver1_l=2 00:07:36.100 08:58:12 -- scripts/common.sh@340 -- # ver2_l=1 00:07:36.100 08:58:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:36.100 08:58:12 -- scripts/common.sh@343 -- # case "$op" in 00:07:36.100 08:58:12 -- scripts/common.sh@344 -- # : 1 00:07:36.100 08:58:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:36.100 08:58:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.100 08:58:12 -- scripts/common.sh@364 -- # decimal 1 00:07:36.100 08:58:12 -- scripts/common.sh@352 -- # local d=1 00:07:36.100 08:58:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.100 08:58:12 -- scripts/common.sh@354 -- # echo 1 00:07:36.100 08:58:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:36.100 08:58:12 -- scripts/common.sh@365 -- # decimal 2 00:07:36.100 08:58:12 -- scripts/common.sh@352 -- # local d=2 00:07:36.100 08:58:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.100 08:58:12 -- scripts/common.sh@354 -- # echo 2 00:07:36.100 08:58:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:36.100 08:58:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:36.100 08:58:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:36.100 08:58:12 -- scripts/common.sh@367 -- # return 0 00:07:36.100 08:58:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.100 08:58:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:36.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.100 --rc genhtml_branch_coverage=1 00:07:36.100 --rc genhtml_function_coverage=1 00:07:36.100 --rc genhtml_legend=1 00:07:36.100 --rc geninfo_all_blocks=1 00:07:36.100 --rc geninfo_unexecuted_blocks=1 00:07:36.100 00:07:36.100 ' 00:07:36.100 08:58:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:36.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.100 --rc genhtml_branch_coverage=1 00:07:36.100 --rc genhtml_function_coverage=1 00:07:36.100 --rc genhtml_legend=1 00:07:36.100 --rc geninfo_all_blocks=1 00:07:36.100 --rc geninfo_unexecuted_blocks=1 00:07:36.100 00:07:36.100 ' 00:07:36.100 08:58:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:36.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.100 --rc genhtml_branch_coverage=1 00:07:36.100 --rc genhtml_function_coverage=1 00:07:36.100 --rc genhtml_legend=1 00:07:36.100 --rc geninfo_all_blocks=1 00:07:36.100 --rc geninfo_unexecuted_blocks=1 00:07:36.100 00:07:36.100 ' 00:07:36.100 08:58:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:36.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.100 --rc genhtml_branch_coverage=1 00:07:36.100 --rc genhtml_function_coverage=1 00:07:36.100 --rc genhtml_legend=1 00:07:36.100 --rc geninfo_all_blocks=1 00:07:36.100 --rc geninfo_unexecuted_blocks=1 00:07:36.100 00:07:36.100 ' 00:07:36.100 08:58:12 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.100 08:58:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.100 08:58:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.100 08:58:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.100 08:58:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.100 08:58:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.100 08:58:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.100 08:58:12 -- paths/export.sh@5 -- # export PATH 00:07:36.100 08:58:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.100 08:58:13 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:36.100 08:58:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.100 08:58:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.100 08:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:36.100 ************************************ 00:07:36.100 START TEST dd_uring_copy 00:07:36.100 ************************************ 00:07:36.100 08:58:13 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:07:36.100 08:58:13 -- dd/uring.sh@15 -- # local zram_dev_id 00:07:36.100 08:58:13 -- dd/uring.sh@16 -- # local magic 00:07:36.100 08:58:13 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:36.100 08:58:13 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:36.100 08:58:13 -- dd/uring.sh@19 -- # local verify_magic 00:07:36.100 08:58:13 -- dd/uring.sh@21 -- # init_zram 00:07:36.100 08:58:13 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:36.100 08:58:13 -- dd/common.sh@164 -- # return 00:07:36.100 08:58:13 -- dd/uring.sh@22 -- # create_zram_dev 00:07:36.100 08:58:13 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:36.100 08:58:13 -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:36.100 08:58:13 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:36.100 08:58:13 -- dd/common.sh@181 -- # local id=1 00:07:36.100 08:58:13 -- dd/common.sh@182 -- # local size=512M 00:07:36.100 08:58:13 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:36.100 08:58:13 -- dd/common.sh@186 -- # echo 512M 00:07:36.100 08:58:13 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:36.100 08:58:13 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:36.100 08:58:13 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:36.101 08:58:13 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:36.101 08:58:13 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:36.101 08:58:13 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:36.101 08:58:13 -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:36.101 08:58:13 -- dd/common.sh@98 -- # xtrace_disable 00:07:36.101 08:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:36.360 08:58:13 -- dd/uring.sh@41 -- # magic=boagdraiqczclajv5a9prm85dhm7pysuf3bffh1mcsarpqresay4bxfq7wgj74vlononxyelatyorjujkj69pzfse43f9d7x1bfu4tls1y8vf6ymeu8setkoukyg00tp7bmkb2zr42hovk9pyxdslslxcc4oq22p9zdlnqxde28wpk1xjg02zkhzklekff4umfc5a9qm7d0kh4drf2f90uczvhgpm4of18ueljvkbn8wxmy8kl223oc220ni0thg2v4pdb5gmz51mzcbghkzpwcl1uw2038n0rgb9dyzblpsdurna9c7pm65klg241dvoy9dga70bzqryr4wcdkv4sduekujklkjj2gb61c7iasluu7hitpctkpvsdjolf68kpngb1qj1ugzj4xsfnrt9pm74ade6fcpuzftho164qtegg9heymoalrblhy2hlk7qs62k8fwqm57amp5jcqdcbx7sc71ejfs1rj1n9mve023wosj4p9jrcf0oam84bz2pmfgvp24iktnagy7tpbjulsciq0727qhdxt5ju3e9tbvycvyyedywbn0exscheo0kaa921qg5blf6zz7o6t92i149fof3tgqedjw7fi95541b6gwb8nsnb6lxlx1ltxrfbehkgeazc6i3r49oq2slvrrc83hlz8aekzqk64dkvszgk2p8fc4givdpa9gicft2u7jwt5gxug90l5bgbtbl8fvjjf8o1iocizq16ubq1kumj9fwdn3yfgjpd28yt78po7m9wl3ix5v33q147134y70h9chzkao0hgvvtbntrbroeb012enzb8aqlzxuzpkraag1hdjeynp311t1xqkas55nloukjd8lytsjfkdpu7nih9k39mwnn13yfqawzo4auwgsaefc75s12v0iphfzlgss3p31838tjb933nzdbw7bipqy57q6ueml7z343fiy42wgosk4uo8pu2xpzbe43x7whtkwnv3bos2kqq0vt68rvp8497zalkwn2mkwpk0 00:07:36.360 08:58:13 -- dd/uring.sh@42 -- # echo 
boagdraiqczclajv5a9prm85dhm7pysuf3bffh1mcsarpqresay4bxfq7wgj74vlononxyelatyorjujkj69pzfse43f9d7x1bfu4tls1y8vf6ymeu8setkoukyg00tp7bmkb2zr42hovk9pyxdslslxcc4oq22p9zdlnqxde28wpk1xjg02zkhzklekff4umfc5a9qm7d0kh4drf2f90uczvhgpm4of18ueljvkbn8wxmy8kl223oc220ni0thg2v4pdb5gmz51mzcbghkzpwcl1uw2038n0rgb9dyzblpsdurna9c7pm65klg241dvoy9dga70bzqryr4wcdkv4sduekujklkjj2gb61c7iasluu7hitpctkpvsdjolf68kpngb1qj1ugzj4xsfnrt9pm74ade6fcpuzftho164qtegg9heymoalrblhy2hlk7qs62k8fwqm57amp5jcqdcbx7sc71ejfs1rj1n9mve023wosj4p9jrcf0oam84bz2pmfgvp24iktnagy7tpbjulsciq0727qhdxt5ju3e9tbvycvyyedywbn0exscheo0kaa921qg5blf6zz7o6t92i149fof3tgqedjw7fi95541b6gwb8nsnb6lxlx1ltxrfbehkgeazc6i3r49oq2slvrrc83hlz8aekzqk64dkvszgk2p8fc4givdpa9gicft2u7jwt5gxug90l5bgbtbl8fvjjf8o1iocizq16ubq1kumj9fwdn3yfgjpd28yt78po7m9wl3ix5v33q147134y70h9chzkao0hgvvtbntrbroeb012enzb8aqlzxuzpkraag1hdjeynp311t1xqkas55nloukjd8lytsjfkdpu7nih9k39mwnn13yfqawzo4auwgsaefc75s12v0iphfzlgss3p31838tjb933nzdbw7bipqy57q6ueml7z343fiy42wgosk4uo8pu2xpzbe43x7whtkwnv3bos2kqq0vt68rvp8497zalkwn2mkwpk0 00:07:36.360 08:58:13 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:36.360 [2024-11-17 08:58:13.087951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.360 [2024-11-17 08:58:13.088056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59046 ] 00:07:36.360 [2024-11-17 08:58:13.220617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.620 [2024-11-17 08:58:13.287921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.879  [2024-11-17T08:58:14.068Z] Copying: 511/511 [MB] (average 1777 MBps) 00:07:37.138 00:07:37.138 08:58:14 -- dd/uring.sh@54 -- # gen_conf 00:07:37.138 08:58:14 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:37.138 08:58:14 -- dd/common.sh@31 -- # xtrace_disable 00:07:37.138 08:58:14 -- common/autotest_common.sh@10 -- # set +x 00:07:37.138 [2024-11-17 08:58:14.056894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:37.139 [2024-11-17 08:58:14.056997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59060 ] 00:07:37.397 { 00:07:37.397 "subsystems": [ 00:07:37.397 { 00:07:37.397 "subsystem": "bdev", 00:07:37.397 "config": [ 00:07:37.397 { 00:07:37.397 "params": { 00:07:37.397 "block_size": 512, 00:07:37.397 "num_blocks": 1048576, 00:07:37.397 "name": "malloc0" 00:07:37.397 }, 00:07:37.397 "method": "bdev_malloc_create" 00:07:37.397 }, 00:07:37.397 { 00:07:37.397 "params": { 00:07:37.397 "filename": "/dev/zram1", 00:07:37.397 "name": "uring0" 00:07:37.397 }, 00:07:37.397 "method": "bdev_uring_create" 00:07:37.397 }, 00:07:37.397 { 00:07:37.397 "method": "bdev_wait_for_examine" 00:07:37.397 } 00:07:37.397 ] 00:07:37.397 } 00:07:37.397 ] 00:07:37.397 } 00:07:37.397 [2024-11-17 08:58:14.196088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.397 [2024-11-17 08:58:14.246414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.774  [2024-11-17T08:58:16.641Z] Copying: 205/512 [MB] (205 MBps) [2024-11-17T08:58:16.901Z] Copying: 413/512 [MB] (208 MBps) [2024-11-17T08:58:17.469Z] Copying: 512/512 [MB] (average 206 MBps) 00:07:40.539 00:07:40.539 08:58:17 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:40.539 08:58:17 -- dd/uring.sh@60 -- # gen_conf 00:07:40.539 08:58:17 -- dd/common.sh@31 -- # xtrace_disable 00:07:40.539 08:58:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.539 [2024-11-17 08:58:17.221716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
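The uring copy backs a uring bdev with a freshly hot-added zram device and pairs it with a malloc bdev, as the config blocks above show. A rough standalone sketch follows: the hot_add read, the 512M size, and the bdev parameters come from the trace, while the /sys/block/.../disksize path and the /tmp paths are assumptions.

# Hot-add a zram device (the read returns its id) and give it 512M.
zram_id=$(cat /sys/class/zram-control/hot_add)
echo 512M > "/sys/block/zram${zram_id}/disksize"

cat > /tmp/dd_uring.json <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram${zram_id}", "name": "uring0" },
          "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Write the magic dump into the uring bdev, then read it back, mirroring the
# two spdk_dd invocations in the trace.
spdk_dd --if=/tmp/magic.dump0 --ob=uring0 --json /tmp/dd_uring.json
spdk_dd --ib=uring0 --of=/tmp/magic.dump1 --json /tmp/dd_uring.json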
00:07:40.539 [2024-11-17 08:58:17.221837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59103 ] 00:07:40.539 { 00:07:40.539 "subsystems": [ 00:07:40.539 { 00:07:40.539 "subsystem": "bdev", 00:07:40.539 "config": [ 00:07:40.539 { 00:07:40.539 "params": { 00:07:40.539 "block_size": 512, 00:07:40.539 "num_blocks": 1048576, 00:07:40.539 "name": "malloc0" 00:07:40.539 }, 00:07:40.539 "method": "bdev_malloc_create" 00:07:40.539 }, 00:07:40.539 { 00:07:40.539 "params": { 00:07:40.539 "filename": "/dev/zram1", 00:07:40.539 "name": "uring0" 00:07:40.539 }, 00:07:40.539 "method": "bdev_uring_create" 00:07:40.539 }, 00:07:40.539 { 00:07:40.539 "method": "bdev_wait_for_examine" 00:07:40.539 } 00:07:40.539 ] 00:07:40.539 } 00:07:40.539 ] 00:07:40.539 } 00:07:40.539 [2024-11-17 08:58:17.357687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.539 [2024-11-17 08:58:17.406239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.918  [2024-11-17T08:58:19.783Z] Copying: 124/512 [MB] (124 MBps) [2024-11-17T08:58:20.764Z] Copying: 237/512 [MB] (112 MBps) [2024-11-17T08:58:21.708Z] Copying: 373/512 [MB] (135 MBps) [2024-11-17T08:58:21.708Z] Copying: 494/512 [MB] (121 MBps) [2024-11-17T08:58:21.967Z] Copying: 512/512 [MB] (average 123 MBps) 00:07:45.037 00:07:45.296 08:58:21 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:45.296 08:58:21 -- dd/uring.sh@66 -- # [[ boagdraiqczclajv5a9prm85dhm7pysuf3bffh1mcsarpqresay4bxfq7wgj74vlononxyelatyorjujkj69pzfse43f9d7x1bfu4tls1y8vf6ymeu8setkoukyg00tp7bmkb2zr42hovk9pyxdslslxcc4oq22p9zdlnqxde28wpk1xjg02zkhzklekff4umfc5a9qm7d0kh4drf2f90uczvhgpm4of18ueljvkbn8wxmy8kl223oc220ni0thg2v4pdb5gmz51mzcbghkzpwcl1uw2038n0rgb9dyzblpsdurna9c7pm65klg241dvoy9dga70bzqryr4wcdkv4sduekujklkjj2gb61c7iasluu7hitpctkpvsdjolf68kpngb1qj1ugzj4xsfnrt9pm74ade6fcpuzftho164qtegg9heymoalrblhy2hlk7qs62k8fwqm57amp5jcqdcbx7sc71ejfs1rj1n9mve023wosj4p9jrcf0oam84bz2pmfgvp24iktnagy7tpbjulsciq0727qhdxt5ju3e9tbvycvyyedywbn0exscheo0kaa921qg5blf6zz7o6t92i149fof3tgqedjw7fi95541b6gwb8nsnb6lxlx1ltxrfbehkgeazc6i3r49oq2slvrrc83hlz8aekzqk64dkvszgk2p8fc4givdpa9gicft2u7jwt5gxug90l5bgbtbl8fvjjf8o1iocizq16ubq1kumj9fwdn3yfgjpd28yt78po7m9wl3ix5v33q147134y70h9chzkao0hgvvtbntrbroeb012enzb8aqlzxuzpkraag1hdjeynp311t1xqkas55nloukjd8lytsjfkdpu7nih9k39mwnn13yfqawzo4auwgsaefc75s12v0iphfzlgss3p31838tjb933nzdbw7bipqy57q6ueml7z343fiy42wgosk4uo8pu2xpzbe43x7whtkwnv3bos2kqq0vt68rvp8497zalkwn2mkwpk0 == 
\b\o\a\g\d\r\a\i\q\c\z\c\l\a\j\v\5\a\9\p\r\m\8\5\d\h\m\7\p\y\s\u\f\3\b\f\f\h\1\m\c\s\a\r\p\q\r\e\s\a\y\4\b\x\f\q\7\w\g\j\7\4\v\l\o\n\o\n\x\y\e\l\a\t\y\o\r\j\u\j\k\j\6\9\p\z\f\s\e\4\3\f\9\d\7\x\1\b\f\u\4\t\l\s\1\y\8\v\f\6\y\m\e\u\8\s\e\t\k\o\u\k\y\g\0\0\t\p\7\b\m\k\b\2\z\r\4\2\h\o\v\k\9\p\y\x\d\s\l\s\l\x\c\c\4\o\q\2\2\p\9\z\d\l\n\q\x\d\e\2\8\w\p\k\1\x\j\g\0\2\z\k\h\z\k\l\e\k\f\f\4\u\m\f\c\5\a\9\q\m\7\d\0\k\h\4\d\r\f\2\f\9\0\u\c\z\v\h\g\p\m\4\o\f\1\8\u\e\l\j\v\k\b\n\8\w\x\m\y\8\k\l\2\2\3\o\c\2\2\0\n\i\0\t\h\g\2\v\4\p\d\b\5\g\m\z\5\1\m\z\c\b\g\h\k\z\p\w\c\l\1\u\w\2\0\3\8\n\0\r\g\b\9\d\y\z\b\l\p\s\d\u\r\n\a\9\c\7\p\m\6\5\k\l\g\2\4\1\d\v\o\y\9\d\g\a\7\0\b\z\q\r\y\r\4\w\c\d\k\v\4\s\d\u\e\k\u\j\k\l\k\j\j\2\g\b\6\1\c\7\i\a\s\l\u\u\7\h\i\t\p\c\t\k\p\v\s\d\j\o\l\f\6\8\k\p\n\g\b\1\q\j\1\u\g\z\j\4\x\s\f\n\r\t\9\p\m\7\4\a\d\e\6\f\c\p\u\z\f\t\h\o\1\6\4\q\t\e\g\g\9\h\e\y\m\o\a\l\r\b\l\h\y\2\h\l\k\7\q\s\6\2\k\8\f\w\q\m\5\7\a\m\p\5\j\c\q\d\c\b\x\7\s\c\7\1\e\j\f\s\1\r\j\1\n\9\m\v\e\0\2\3\w\o\s\j\4\p\9\j\r\c\f\0\o\a\m\8\4\b\z\2\p\m\f\g\v\p\2\4\i\k\t\n\a\g\y\7\t\p\b\j\u\l\s\c\i\q\0\7\2\7\q\h\d\x\t\5\j\u\3\e\9\t\b\v\y\c\v\y\y\e\d\y\w\b\n\0\e\x\s\c\h\e\o\0\k\a\a\9\2\1\q\g\5\b\l\f\6\z\z\7\o\6\t\9\2\i\1\4\9\f\o\f\3\t\g\q\e\d\j\w\7\f\i\9\5\5\4\1\b\6\g\w\b\8\n\s\n\b\6\l\x\l\x\1\l\t\x\r\f\b\e\h\k\g\e\a\z\c\6\i\3\r\4\9\o\q\2\s\l\v\r\r\c\8\3\h\l\z\8\a\e\k\z\q\k\6\4\d\k\v\s\z\g\k\2\p\8\f\c\4\g\i\v\d\p\a\9\g\i\c\f\t\2\u\7\j\w\t\5\g\x\u\g\9\0\l\5\b\g\b\t\b\l\8\f\v\j\j\f\8\o\1\i\o\c\i\z\q\1\6\u\b\q\1\k\u\m\j\9\f\w\d\n\3\y\f\g\j\p\d\2\8\y\t\7\8\p\o\7\m\9\w\l\3\i\x\5\v\3\3\q\1\4\7\1\3\4\y\7\0\h\9\c\h\z\k\a\o\0\h\g\v\v\t\b\n\t\r\b\r\o\e\b\0\1\2\e\n\z\b\8\a\q\l\z\x\u\z\p\k\r\a\a\g\1\h\d\j\e\y\n\p\3\1\1\t\1\x\q\k\a\s\5\5\n\l\o\u\k\j\d\8\l\y\t\s\j\f\k\d\p\u\7\n\i\h\9\k\3\9\m\w\n\n\1\3\y\f\q\a\w\z\o\4\a\u\w\g\s\a\e\f\c\7\5\s\1\2\v\0\i\p\h\f\z\l\g\s\s\3\p\3\1\8\3\8\t\j\b\9\3\3\n\z\d\b\w\7\b\i\p\q\y\5\7\q\6\u\e\m\l\7\z\3\4\3\f\i\y\4\2\w\g\o\s\k\4\u\o\8\p\u\2\x\p\z\b\e\4\3\x\7\w\h\t\k\w\n\v\3\b\o\s\2\k\q\q\0\v\t\6\8\r\v\p\8\4\9\7\z\a\l\k\w\n\2\m\k\w\p\k\0 ]] 00:07:45.296 08:58:21 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:45.297 08:58:21 -- dd/uring.sh@69 -- # [[ boagdraiqczclajv5a9prm85dhm7pysuf3bffh1mcsarpqresay4bxfq7wgj74vlononxyelatyorjujkj69pzfse43f9d7x1bfu4tls1y8vf6ymeu8setkoukyg00tp7bmkb2zr42hovk9pyxdslslxcc4oq22p9zdlnqxde28wpk1xjg02zkhzklekff4umfc5a9qm7d0kh4drf2f90uczvhgpm4of18ueljvkbn8wxmy8kl223oc220ni0thg2v4pdb5gmz51mzcbghkzpwcl1uw2038n0rgb9dyzblpsdurna9c7pm65klg241dvoy9dga70bzqryr4wcdkv4sduekujklkjj2gb61c7iasluu7hitpctkpvsdjolf68kpngb1qj1ugzj4xsfnrt9pm74ade6fcpuzftho164qtegg9heymoalrblhy2hlk7qs62k8fwqm57amp5jcqdcbx7sc71ejfs1rj1n9mve023wosj4p9jrcf0oam84bz2pmfgvp24iktnagy7tpbjulsciq0727qhdxt5ju3e9tbvycvyyedywbn0exscheo0kaa921qg5blf6zz7o6t92i149fof3tgqedjw7fi95541b6gwb8nsnb6lxlx1ltxrfbehkgeazc6i3r49oq2slvrrc83hlz8aekzqk64dkvszgk2p8fc4givdpa9gicft2u7jwt5gxug90l5bgbtbl8fvjjf8o1iocizq16ubq1kumj9fwdn3yfgjpd28yt78po7m9wl3ix5v33q147134y70h9chzkao0hgvvtbntrbroeb012enzb8aqlzxuzpkraag1hdjeynp311t1xqkas55nloukjd8lytsjfkdpu7nih9k39mwnn13yfqawzo4auwgsaefc75s12v0iphfzlgss3p31838tjb933nzdbw7bipqy57q6ueml7z343fiy42wgosk4uo8pu2xpzbe43x7whtkwnv3bos2kqq0vt68rvp8497zalkwn2mkwpk0 == 
\b\o\a\g\d\r\a\i\q\c\z\c\l\a\j\v\5\a\9\p\r\m\8\5\d\h\m\7\p\y\s\u\f\3\b\f\f\h\1\m\c\s\a\r\p\q\r\e\s\a\y\4\b\x\f\q\7\w\g\j\7\4\v\l\o\n\o\n\x\y\e\l\a\t\y\o\r\j\u\j\k\j\6\9\p\z\f\s\e\4\3\f\9\d\7\x\1\b\f\u\4\t\l\s\1\y\8\v\f\6\y\m\e\u\8\s\e\t\k\o\u\k\y\g\0\0\t\p\7\b\m\k\b\2\z\r\4\2\h\o\v\k\9\p\y\x\d\s\l\s\l\x\c\c\4\o\q\2\2\p\9\z\d\l\n\q\x\d\e\2\8\w\p\k\1\x\j\g\0\2\z\k\h\z\k\l\e\k\f\f\4\u\m\f\c\5\a\9\q\m\7\d\0\k\h\4\d\r\f\2\f\9\0\u\c\z\v\h\g\p\m\4\o\f\1\8\u\e\l\j\v\k\b\n\8\w\x\m\y\8\k\l\2\2\3\o\c\2\2\0\n\i\0\t\h\g\2\v\4\p\d\b\5\g\m\z\5\1\m\z\c\b\g\h\k\z\p\w\c\l\1\u\w\2\0\3\8\n\0\r\g\b\9\d\y\z\b\l\p\s\d\u\r\n\a\9\c\7\p\m\6\5\k\l\g\2\4\1\d\v\o\y\9\d\g\a\7\0\b\z\q\r\y\r\4\w\c\d\k\v\4\s\d\u\e\k\u\j\k\l\k\j\j\2\g\b\6\1\c\7\i\a\s\l\u\u\7\h\i\t\p\c\t\k\p\v\s\d\j\o\l\f\6\8\k\p\n\g\b\1\q\j\1\u\g\z\j\4\x\s\f\n\r\t\9\p\m\7\4\a\d\e\6\f\c\p\u\z\f\t\h\o\1\6\4\q\t\e\g\g\9\h\e\y\m\o\a\l\r\b\l\h\y\2\h\l\k\7\q\s\6\2\k\8\f\w\q\m\5\7\a\m\p\5\j\c\q\d\c\b\x\7\s\c\7\1\e\j\f\s\1\r\j\1\n\9\m\v\e\0\2\3\w\o\s\j\4\p\9\j\r\c\f\0\o\a\m\8\4\b\z\2\p\m\f\g\v\p\2\4\i\k\t\n\a\g\y\7\t\p\b\j\u\l\s\c\i\q\0\7\2\7\q\h\d\x\t\5\j\u\3\e\9\t\b\v\y\c\v\y\y\e\d\y\w\b\n\0\e\x\s\c\h\e\o\0\k\a\a\9\2\1\q\g\5\b\l\f\6\z\z\7\o\6\t\9\2\i\1\4\9\f\o\f\3\t\g\q\e\d\j\w\7\f\i\9\5\5\4\1\b\6\g\w\b\8\n\s\n\b\6\l\x\l\x\1\l\t\x\r\f\b\e\h\k\g\e\a\z\c\6\i\3\r\4\9\o\q\2\s\l\v\r\r\c\8\3\h\l\z\8\a\e\k\z\q\k\6\4\d\k\v\s\z\g\k\2\p\8\f\c\4\g\i\v\d\p\a\9\g\i\c\f\t\2\u\7\j\w\t\5\g\x\u\g\9\0\l\5\b\g\b\t\b\l\8\f\v\j\j\f\8\o\1\i\o\c\i\z\q\1\6\u\b\q\1\k\u\m\j\9\f\w\d\n\3\y\f\g\j\p\d\2\8\y\t\7\8\p\o\7\m\9\w\l\3\i\x\5\v\3\3\q\1\4\7\1\3\4\y\7\0\h\9\c\h\z\k\a\o\0\h\g\v\v\t\b\n\t\r\b\r\o\e\b\0\1\2\e\n\z\b\8\a\q\l\z\x\u\z\p\k\r\a\a\g\1\h\d\j\e\y\n\p\3\1\1\t\1\x\q\k\a\s\5\5\n\l\o\u\k\j\d\8\l\y\t\s\j\f\k\d\p\u\7\n\i\h\9\k\3\9\m\w\n\n\1\3\y\f\q\a\w\z\o\4\a\u\w\g\s\a\e\f\c\7\5\s\1\2\v\0\i\p\h\f\z\l\g\s\s\3\p\3\1\8\3\8\t\j\b\9\3\3\n\z\d\b\w\7\b\i\p\q\y\5\7\q\6\u\e\m\l\7\z\3\4\3\f\i\y\4\2\w\g\o\s\k\4\u\o\8\p\u\2\x\p\z\b\e\4\3\x\7\w\h\t\k\w\n\v\3\b\o\s\2\k\q\q\0\v\t\6\8\r\v\p\8\4\9\7\z\a\l\k\w\n\2\m\k\w\p\k\0 ]] 00:07:45.297 08:58:21 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:45.556 08:58:22 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:45.556 08:58:22 -- dd/uring.sh@75 -- # gen_conf 00:07:45.556 08:58:22 -- dd/common.sh@31 -- # xtrace_disable 00:07:45.556 08:58:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.556 [2024-11-17 08:58:22.383804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:45.556 [2024-11-17 08:58:22.383915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:07:45.556 { 00:07:45.556 "subsystems": [ 00:07:45.556 { 00:07:45.556 "subsystem": "bdev", 00:07:45.556 "config": [ 00:07:45.556 { 00:07:45.556 "params": { 00:07:45.556 "block_size": 512, 00:07:45.556 "num_blocks": 1048576, 00:07:45.556 "name": "malloc0" 00:07:45.556 }, 00:07:45.556 "method": "bdev_malloc_create" 00:07:45.556 }, 00:07:45.556 { 00:07:45.556 "params": { 00:07:45.556 "filename": "/dev/zram1", 00:07:45.556 "name": "uring0" 00:07:45.556 }, 00:07:45.556 "method": "bdev_uring_create" 00:07:45.556 }, 00:07:45.556 { 00:07:45.556 "method": "bdev_wait_for_examine" 00:07:45.556 } 00:07:45.556 ] 00:07:45.556 } 00:07:45.556 ] 00:07:45.556 } 00:07:45.814 [2024-11-17 08:58:22.521270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.814 [2024-11-17 08:58:22.569160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.192  [2024-11-17T08:58:25.059Z] Copying: 164/512 [MB] (164 MBps) [2024-11-17T08:58:25.997Z] Copying: 330/512 [MB] (166 MBps) [2024-11-17T08:58:25.997Z] Copying: 497/512 [MB] (167 MBps) [2024-11-17T08:58:26.256Z] Copying: 512/512 [MB] (average 165 MBps) 00:07:49.326 00:07:49.326 08:58:26 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:49.326 08:58:26 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:49.326 08:58:26 -- dd/uring.sh@87 -- # : 00:07:49.326 08:58:26 -- dd/uring.sh@87 -- # : 00:07:49.326 08:58:26 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:49.326 08:58:26 -- dd/uring.sh@87 -- # gen_conf 00:07:49.326 08:58:26 -- dd/common.sh@31 -- # xtrace_disable 00:07:49.326 08:58:26 -- common/autotest_common.sh@10 -- # set +x 00:07:49.326 [2024-11-17 08:58:26.111161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:49.326 [2024-11-17 08:58:26.111258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59228 ] 00:07:49.326 { 00:07:49.326 "subsystems": [ 00:07:49.326 { 00:07:49.326 "subsystem": "bdev", 00:07:49.326 "config": [ 00:07:49.326 { 00:07:49.326 "params": { 00:07:49.326 "block_size": 512, 00:07:49.326 "num_blocks": 1048576, 00:07:49.326 "name": "malloc0" 00:07:49.326 }, 00:07:49.326 "method": "bdev_malloc_create" 00:07:49.326 }, 00:07:49.326 { 00:07:49.326 "params": { 00:07:49.326 "filename": "/dev/zram1", 00:07:49.326 "name": "uring0" 00:07:49.326 }, 00:07:49.326 "method": "bdev_uring_create" 00:07:49.326 }, 00:07:49.326 { 00:07:49.326 "params": { 00:07:49.326 "name": "uring0" 00:07:49.326 }, 00:07:49.326 "method": "bdev_uring_delete" 00:07:49.326 }, 00:07:49.326 { 00:07:49.326 "method": "bdev_wait_for_examine" 00:07:49.326 } 00:07:49.326 ] 00:07:49.326 } 00:07:49.326 ] 00:07:49.326 } 00:07:49.327 [2024-11-17 08:58:26.249590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.586 [2024-11-17 08:58:26.300390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.586  [2024-11-17T08:58:26.775Z] Copying: 0/0 [B] (average 0 Bps) 00:07:49.845 00:07:50.104 08:58:26 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:50.104 08:58:26 -- dd/uring.sh@94 -- # gen_conf 00:07:50.104 08:58:26 -- dd/uring.sh@94 -- # : 00:07:50.104 08:58:26 -- dd/common.sh@31 -- # xtrace_disable 00:07:50.104 08:58:26 -- common/autotest_common.sh@650 -- # local es=0 00:07:50.104 08:58:26 -- common/autotest_common.sh@10 -- # set +x 00:07:50.104 08:58:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:50.104 08:58:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.104 08:58:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.104 08:58:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.104 08:58:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.104 08:58:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.104 08:58:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.104 08:58:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.104 08:58:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.104 08:58:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:50.104 [2024-11-17 08:58:26.819242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
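The delete case feeds spdk_dd a config that creates uring0 and immediately removes it again via bdev_uring_delete, and the follow-up copy (traced next) is expected to fail. A sketch of that shape, with the config contents taken from the trace and the output target and /tmp path assumed:

cat > /tmp/dd_uring_delete.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram1", "name": "uring0" },
          "method": "bdev_uring_create" },
        { "params": { "name": "uring0" }, "method": "bdev_uring_delete" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# uring0 no longer exists by the time the copy starts, so spdk_dd should
# report that it cannot open the bdev and exit non-zero.
if spdk_dd --ib=uring0 --of=/dev/null --json /tmp/dd_uring_delete.json; then
  echo "unexpected success: uring0 should have been deleted" >&2
fi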
00:07:50.104 [2024-11-17 08:58:26.819328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59260 ] 00:07:50.104 { 00:07:50.104 "subsystems": [ 00:07:50.104 { 00:07:50.105 "subsystem": "bdev", 00:07:50.105 "config": [ 00:07:50.105 { 00:07:50.105 "params": { 00:07:50.105 "block_size": 512, 00:07:50.105 "num_blocks": 1048576, 00:07:50.105 "name": "malloc0" 00:07:50.105 }, 00:07:50.105 "method": "bdev_malloc_create" 00:07:50.105 }, 00:07:50.105 { 00:07:50.105 "params": { 00:07:50.105 "filename": "/dev/zram1", 00:07:50.105 "name": "uring0" 00:07:50.105 }, 00:07:50.105 "method": "bdev_uring_create" 00:07:50.105 }, 00:07:50.105 { 00:07:50.105 "params": { 00:07:50.105 "name": "uring0" 00:07:50.105 }, 00:07:50.105 "method": "bdev_uring_delete" 00:07:50.105 }, 00:07:50.105 { 00:07:50.105 "method": "bdev_wait_for_examine" 00:07:50.105 } 00:07:50.105 ] 00:07:50.105 } 00:07:50.105 ] 00:07:50.105 } 00:07:50.105 [2024-11-17 08:58:26.947228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.105 [2024-11-17 08:58:27.000434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.364 [2024-11-17 08:58:27.152176] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:50.364 [2024-11-17 08:58:27.152239] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:50.364 [2024-11-17 08:58:27.152266] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:07:50.364 [2024-11-17 08:58:27.152276] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.623 [2024-11-17 08:58:27.332496] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:50.623 08:58:27 -- common/autotest_common.sh@653 -- # es=237 00:07:50.623 08:58:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.623 08:58:27 -- common/autotest_common.sh@662 -- # es=109 00:07:50.623 08:58:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:50.623 08:58:27 -- common/autotest_common.sh@670 -- # es=1 00:07:50.623 08:58:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.623 08:58:27 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:50.623 08:58:27 -- dd/common.sh@172 -- # local id=1 00:07:50.623 08:58:27 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:50.623 08:58:27 -- dd/common.sh@176 -- # echo 1 00:07:50.624 08:58:27 -- dd/common.sh@177 -- # echo 1 00:07:50.624 08:58:27 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:50.883 00:07:50.883 real 0m14.689s 00:07:50.883 user 0m8.204s 00:07:50.883 sys 0m5.808s 00:07:50.883 08:58:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.883 ************************************ 00:07:50.883 END TEST dd_uring_copy 00:07:50.883 ************************************ 00:07:50.883 08:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:50.883 00:07:50.883 real 0m14.930s 00:07:50.883 user 0m8.345s 00:07:50.883 sys 0m5.914s 00:07:50.883 08:58:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.883 08:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:50.883 ************************************ 00:07:50.883 END TEST spdk_dd_uring 00:07:50.883 ************************************ 00:07:50.883 08:58:27 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:50.883 08:58:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.883 08:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.883 08:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:50.883 ************************************ 00:07:50.883 START TEST spdk_dd_sparse 00:07:50.883 ************************************ 00:07:50.883 08:58:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:51.144 * Looking for test storage... 00:07:51.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:51.144 08:58:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.144 08:58:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.144 08:58:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.144 08:58:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.144 08:58:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.144 08:58:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.144 08:58:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.144 08:58:27 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.144 08:58:27 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.144 08:58:27 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.144 08:58:27 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.144 08:58:27 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.144 08:58:27 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.144 08:58:27 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.144 08:58:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.144 08:58:27 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.144 08:58:27 -- scripts/common.sh@344 -- # : 1 00:07:51.144 08:58:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.144 08:58:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.144 08:58:27 -- scripts/common.sh@364 -- # decimal 1 00:07:51.144 08:58:27 -- scripts/common.sh@352 -- # local d=1 00:07:51.144 08:58:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.144 08:58:27 -- scripts/common.sh@354 -- # echo 1 00:07:51.144 08:58:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.144 08:58:27 -- scripts/common.sh@365 -- # decimal 2 00:07:51.144 08:58:27 -- scripts/common.sh@352 -- # local d=2 00:07:51.144 08:58:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.144 08:58:27 -- scripts/common.sh@354 -- # echo 2 00:07:51.144 08:58:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.144 08:58:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.144 08:58:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.144 08:58:27 -- scripts/common.sh@367 -- # return 0 00:07:51.144 08:58:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.144 08:58:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.144 --rc genhtml_branch_coverage=1 00:07:51.144 --rc genhtml_function_coverage=1 00:07:51.144 --rc genhtml_legend=1 00:07:51.144 --rc geninfo_all_blocks=1 00:07:51.144 --rc geninfo_unexecuted_blocks=1 00:07:51.144 00:07:51.144 ' 00:07:51.144 08:58:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.144 --rc genhtml_branch_coverage=1 00:07:51.144 --rc genhtml_function_coverage=1 00:07:51.144 --rc genhtml_legend=1 00:07:51.144 --rc geninfo_all_blocks=1 00:07:51.144 --rc geninfo_unexecuted_blocks=1 00:07:51.144 00:07:51.144 ' 00:07:51.144 08:58:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.144 --rc genhtml_branch_coverage=1 00:07:51.144 --rc genhtml_function_coverage=1 00:07:51.144 --rc genhtml_legend=1 00:07:51.144 --rc geninfo_all_blocks=1 00:07:51.144 --rc geninfo_unexecuted_blocks=1 00:07:51.144 00:07:51.144 ' 00:07:51.144 08:58:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.144 --rc genhtml_branch_coverage=1 00:07:51.144 --rc genhtml_function_coverage=1 00:07:51.144 --rc genhtml_legend=1 00:07:51.144 --rc geninfo_all_blocks=1 00:07:51.144 --rc geninfo_unexecuted_blocks=1 00:07:51.144 00:07:51.144 ' 00:07:51.144 08:58:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.144 08:58:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.144 08:58:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.144 08:58:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.144 08:58:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.144 08:58:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.144 08:58:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.144 08:58:27 -- paths/export.sh@5 -- # export PATH 00:07:51.145 08:58:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.145 08:58:27 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:51.145 08:58:27 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:51.145 08:58:27 -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:51.145 08:58:27 -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:51.145 08:58:27 -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:51.145 08:58:27 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:51.145 08:58:27 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:51.145 08:58:27 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:51.145 08:58:27 -- dd/sparse.sh@118 -- # prepare 00:07:51.145 08:58:27 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:51.145 08:58:27 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:51.145 1+0 records in 00:07:51.145 1+0 records out 00:07:51.145 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00589718 s, 711 MB/s 00:07:51.145 08:58:27 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:51.145 1+0 records in 00:07:51.145 1+0 records out 00:07:51.145 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00681763 s, 615 MB/s 00:07:51.145 08:58:27 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:51.145 1+0 records in 00:07:51.145 1+0 records out 00:07:51.145 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00374591 s, 1.1 GB/s 00:07:51.145 08:58:27 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:51.145 08:58:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.145 08:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.145 08:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:51.145 ************************************ 00:07:51.145 START TEST dd_sparse_file_to_file 00:07:51.145 
************************************ 00:07:51.145 08:58:27 -- common/autotest_common.sh@1114 -- # file_to_file 00:07:51.145 08:58:27 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:51.145 08:58:27 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:51.145 08:58:27 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:51.145 08:58:27 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:51.145 08:58:27 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:51.145 08:58:27 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:51.145 08:58:27 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:51.145 08:58:27 -- dd/sparse.sh@41 -- # gen_conf 00:07:51.145 08:58:28 -- dd/common.sh@31 -- # xtrace_disable 00:07:51.145 08:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:51.145 [2024-11-17 08:58:28.048963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:51.145 [2024-11-17 08:58:28.049080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:07:51.145 { 00:07:51.145 "subsystems": [ 00:07:51.145 { 00:07:51.145 "subsystem": "bdev", 00:07:51.145 "config": [ 00:07:51.145 { 00:07:51.145 "params": { 00:07:51.145 "block_size": 4096, 00:07:51.145 "filename": "dd_sparse_aio_disk", 00:07:51.145 "name": "dd_aio" 00:07:51.145 }, 00:07:51.145 "method": "bdev_aio_create" 00:07:51.145 }, 00:07:51.145 { 00:07:51.145 "params": { 00:07:51.145 "lvs_name": "dd_lvstore", 00:07:51.145 "bdev_name": "dd_aio" 00:07:51.145 }, 00:07:51.145 "method": "bdev_lvol_create_lvstore" 00:07:51.145 }, 00:07:51.145 { 00:07:51.145 "method": "bdev_wait_for_examine" 00:07:51.145 } 00:07:51.145 ] 00:07:51.145 } 00:07:51.145 ] 00:07:51.145 } 00:07:51.404 [2024-11-17 08:58:28.187012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.404 [2024-11-17 08:58:28.243836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.664  [2024-11-17T08:58:28.594Z] Copying: 12/36 [MB] (average 1500 MBps) 00:07:51.664 00:07:51.664 08:58:28 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:51.664 08:58:28 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:51.664 08:58:28 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:51.664 08:58:28 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:51.664 08:58:28 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:51.664 08:58:28 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:51.664 08:58:28 -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:51.664 08:58:28 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:51.664 08:58:28 -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:51.664 08:58:28 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:51.664 00:07:51.664 real 0m0.557s 00:07:51.664 user 0m0.332s 00:07:51.664 sys 0m0.140s 00:07:51.664 08:58:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.664 08:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:51.664 ************************************ 00:07:51.664 END TEST dd_sparse_file_to_file 00:07:51.664 ************************************ 00:07:51.924 08:58:28 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:07:51.924 08:58:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.924 08:58:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.924 08:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:51.924 ************************************ 00:07:51.924 START TEST dd_sparse_file_to_bdev 00:07:51.924 ************************************ 00:07:51.924 08:58:28 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:07:51.924 08:58:28 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:51.924 08:58:28 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:51.924 08:58:28 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:07:51.924 08:58:28 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:51.924 08:58:28 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:51.924 08:58:28 -- dd/sparse.sh@73 -- # gen_conf 00:07:51.924 08:58:28 -- dd/common.sh@31 -- # xtrace_disable 00:07:51.924 08:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:51.924 [2024-11-17 08:58:28.653830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:51.924 [2024-11-17 08:58:28.653919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59399 ] 00:07:51.924 { 00:07:51.924 "subsystems": [ 00:07:51.924 { 00:07:51.924 "subsystem": "bdev", 00:07:51.924 "config": [ 00:07:51.924 { 00:07:51.924 "params": { 00:07:51.924 "block_size": 4096, 00:07:51.924 "filename": "dd_sparse_aio_disk", 00:07:51.924 "name": "dd_aio" 00:07:51.924 }, 00:07:51.924 "method": "bdev_aio_create" 00:07:51.924 }, 00:07:51.924 { 00:07:51.924 "params": { 00:07:51.924 "lvs_name": "dd_lvstore", 00:07:51.924 "lvol_name": "dd_lvol", 00:07:51.924 "size": 37748736, 00:07:51.924 "thin_provision": true 00:07:51.924 }, 00:07:51.924 "method": "bdev_lvol_create" 00:07:51.924 }, 00:07:51.924 { 00:07:51.924 "method": "bdev_wait_for_examine" 00:07:51.924 } 00:07:51.924 ] 00:07:51.924 } 00:07:51.924 ] 00:07:51.924 } 00:07:51.924 [2024-11-17 08:58:28.791375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.924 [2024-11-17 08:58:28.843757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.183 [2024-11-17 08:58:28.903918] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:07:52.183  [2024-11-17T08:58:29.113Z] Copying: 12/36 [MB] (average 352 MBps)[2024-11-17 08:58:28.954857] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:07:52.442 00:07:52.442 00:07:52.442 00:07:52.442 real 0m0.562s 00:07:52.442 user 0m0.367s 00:07:52.442 sys 0m0.119s 00:07:52.442 08:58:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.442 ************************************ 00:07:52.442 END TEST dd_sparse_file_to_bdev 00:07:52.442 ************************************ 00:07:52.442 08:58:29 -- common/autotest_common.sh@10 -- # set +x 
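The two runs above exercise spdk_dd's sparse-copy path: sparse.sh truncates a 100 MB AIO backing file, writes three 4 MiB extents at offsets 0, 16 MiB and 32 MiB so the 36 MiB source file stays holey, then copies file-to-file and file-to-bdev through a JSON bdev config (an AIO bdev with an lvolstore on top, plus a thin-provisioned lvol as the bdev target). A minimal standalone sketch of the same flow, using only the commands, flags and config values printed in the trace — the heredoc layout and the dd_sparse.json filename are illustrative, since the harness actually feeds the config through /dev/fd/62:

# prepare the backing file and a sparse 36 MiB source file
truncate dd_sparse_aio_disk --size 104857600
dd if=/dev/zero of=file_zero1 bs=4M count=1          # extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # extent at 32 MiB

# bdev config as printed by gen_conf above: AIO bdev + lvolstore on top of it
cat > dd_sparse.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
        { "method": "bdev_lvol_create_lvstore",
          "params": { "bdev_name": "dd_aio", "lvs_name": "dd_lvstore" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# file-to-file copy with hole skipping (dd_sparse_file_to_file)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_sparse.json
# dd_sparse_file_to_bdev then copies file_zero2 into a thin-provisioned lvol via
# --ob=dd_lvstore/dd_lvol, after adding a bdev_lvol_create entry to the same config.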
00:07:52.442 08:58:29 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:52.442 08:58:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:52.442 08:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.442 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:52.442 ************************************ 00:07:52.442 START TEST dd_sparse_bdev_to_file 00:07:52.442 ************************************ 00:07:52.442 08:58:29 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:07:52.442 08:58:29 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:52.442 08:58:29 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:52.442 08:58:29 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:52.442 08:58:29 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:52.442 08:58:29 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:52.442 08:58:29 -- dd/sparse.sh@91 -- # gen_conf 00:07:52.442 08:58:29 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.442 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:52.442 [2024-11-17 08:58:29.279543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:52.442 [2024-11-17 08:58:29.279702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59425 ] 00:07:52.442 { 00:07:52.442 "subsystems": [ 00:07:52.442 { 00:07:52.442 "subsystem": "bdev", 00:07:52.442 "config": [ 00:07:52.442 { 00:07:52.442 "params": { 00:07:52.442 "block_size": 4096, 00:07:52.442 "filename": "dd_sparse_aio_disk", 00:07:52.442 "name": "dd_aio" 00:07:52.442 }, 00:07:52.442 "method": "bdev_aio_create" 00:07:52.442 }, 00:07:52.442 { 00:07:52.442 "method": "bdev_wait_for_examine" 00:07:52.442 } 00:07:52.442 ] 00:07:52.442 } 00:07:52.442 ] 00:07:52.442 } 00:07:52.702 [2024-11-17 08:58:29.418761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.702 [2024-11-17 08:58:29.472407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.702  [2024-11-17T08:58:29.891Z] Copying: 12/36 [MB] (average 1500 MBps) 00:07:52.961 00:07:52.961 08:58:29 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:52.961 08:58:29 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:52.961 08:58:29 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:52.961 08:58:29 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:52.961 08:58:29 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:52.961 08:58:29 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:52.961 08:58:29 -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:52.961 08:58:29 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:52.961 08:58:29 -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:52.961 08:58:29 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:52.961 00:07:52.961 real 0m0.549s 00:07:52.961 user 0m0.336s 00:07:52.961 sys 0m0.135s 00:07:52.961 08:58:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.961 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 ************************************ 00:07:52.961 END TEST dd_sparse_bdev_to_file 00:07:52.961 ************************************ 00:07:52.961 08:58:29 -- 
dd/sparse.sh@1 -- # cleanup 00:07:52.961 08:58:29 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:52.961 08:58:29 -- dd/sparse.sh@12 -- # rm file_zero1 00:07:52.961 08:58:29 -- dd/sparse.sh@13 -- # rm file_zero2 00:07:52.961 08:58:29 -- dd/sparse.sh@14 -- # rm file_zero3 00:07:52.961 00:07:52.961 real 0m2.048s 00:07:52.961 user 0m1.202s 00:07:52.961 sys 0m0.597s 00:07:52.961 08:58:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.961 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 ************************************ 00:07:52.961 END TEST spdk_dd_sparse 00:07:52.961 ************************************ 00:07:52.961 08:58:29 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:52.961 08:58:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:52.961 08:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.961 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:53.222 ************************************ 00:07:53.222 START TEST spdk_dd_negative 00:07:53.222 ************************************ 00:07:53.222 08:58:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:53.222 * Looking for test storage... 00:07:53.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:53.222 08:58:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.222 08:58:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.222 08:58:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.222 08:58:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.222 08:58:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.222 08:58:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.222 08:58:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.222 08:58:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.222 08:58:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.222 08:58:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.222 08:58:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.222 08:58:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.222 08:58:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.222 08:58:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.222 08:58:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.222 08:58:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.222 08:58:30 -- scripts/common.sh@344 -- # : 1 00:07:53.222 08:58:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.222 08:58:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.222 08:58:30 -- scripts/common.sh@364 -- # decimal 1 00:07:53.222 08:58:30 -- scripts/common.sh@352 -- # local d=1 00:07:53.222 08:58:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.222 08:58:30 -- scripts/common.sh@354 -- # echo 1 00:07:53.222 08:58:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.222 08:58:30 -- scripts/common.sh@365 -- # decimal 2 00:07:53.222 08:58:30 -- scripts/common.sh@352 -- # local d=2 00:07:53.222 08:58:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.222 08:58:30 -- scripts/common.sh@354 -- # echo 2 00:07:53.222 08:58:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.222 08:58:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.222 08:58:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.222 08:58:30 -- scripts/common.sh@367 -- # return 0 00:07:53.222 08:58:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.222 08:58:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.222 --rc genhtml_branch_coverage=1 00:07:53.222 --rc genhtml_function_coverage=1 00:07:53.222 --rc genhtml_legend=1 00:07:53.222 --rc geninfo_all_blocks=1 00:07:53.222 --rc geninfo_unexecuted_blocks=1 00:07:53.222 00:07:53.222 ' 00:07:53.222 08:58:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.222 --rc genhtml_branch_coverage=1 00:07:53.222 --rc genhtml_function_coverage=1 00:07:53.222 --rc genhtml_legend=1 00:07:53.222 --rc geninfo_all_blocks=1 00:07:53.222 --rc geninfo_unexecuted_blocks=1 00:07:53.222 00:07:53.222 ' 00:07:53.222 08:58:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.222 --rc genhtml_branch_coverage=1 00:07:53.222 --rc genhtml_function_coverage=1 00:07:53.222 --rc genhtml_legend=1 00:07:53.222 --rc geninfo_all_blocks=1 00:07:53.222 --rc geninfo_unexecuted_blocks=1 00:07:53.222 00:07:53.222 ' 00:07:53.222 08:58:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.222 --rc genhtml_branch_coverage=1 00:07:53.222 --rc genhtml_function_coverage=1 00:07:53.222 --rc genhtml_legend=1 00:07:53.222 --rc geninfo_all_blocks=1 00:07:53.222 --rc geninfo_unexecuted_blocks=1 00:07:53.222 00:07:53.222 ' 00:07:53.222 08:58:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.222 08:58:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.222 08:58:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.222 08:58:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.222 08:58:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.222 08:58:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.222 08:58:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.222 08:58:30 -- paths/export.sh@5 -- # export PATH 00:07:53.222 08:58:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.222 08:58:30 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.223 08:58:30 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.223 08:58:30 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.223 08:58:30 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.223 08:58:30 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:53.223 08:58:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.223 08:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.223 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.223 ************************************ 00:07:53.223 START TEST dd_invalid_arguments 00:07:53.223 ************************************ 00:07:53.223 08:58:30 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:07:53.223 08:58:30 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:53.223 08:58:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:53.223 08:58:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:53.223 08:58:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.223 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.223 08:58:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.223 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.223 08:58:30 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.223 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.223 08:58:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.223 08:58:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.223 08:58:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:53.223 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:53.223 options: 00:07:53.223 -c, --config JSON config file (default none) 00:07:53.223 --json JSON config file (default none) 00:07:53.223 --json-ignore-init-errors 00:07:53.223 don't exit on invalid config entry 00:07:53.223 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:53.223 -g, --single-file-segments 00:07:53.223 force creating just one hugetlbfs file 00:07:53.223 -h, --help show this usage 00:07:53.223 -i, --shm-id shared memory ID (optional) 00:07:53.223 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:53.223 --lcores lcore to CPU mapping list. The list is in the format: 00:07:53.223 [<,lcores[@CPUs]>...] 00:07:53.223 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:53.223 Within the group, '-' is used for range separator, 00:07:53.223 ',' is used for single number separator. 00:07:53.223 '( )' can be omitted for single element group, 00:07:53.223 '@' can be omitted if cpus and lcores have the same value 00:07:53.223 -n, --mem-channels channel number of memory channels used for DPDK 00:07:53.223 -p, --main-core main (primary) core for DPDK 00:07:53.223 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:53.223 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:53.223 --disable-cpumask-locks Disable CPU core lock files. 00:07:53.223 --silence-noticelog disable notice level logging to stderr 00:07:53.223 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:53.223 -u, --no-pci disable PCI access 00:07:53.223 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:53.223 --max-delay maximum reactor delay (in microseconds) 00:07:53.223 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:53.223 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:53.223 -R, --huge-unlink unlink huge files after initialization 00:07:53.223 -v, --version print SPDK version 00:07:53.223 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:53.223 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:53.223 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:53.223 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:53.223 Tracepoints vary in size and can use more than one trace entry. 
00:07:53.223 --rpcs-allowed comma-separated list of permitted RPCS 00:07:53.223 --env-context Opaque context for use of the env implementation 00:07:53.223 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:53.223 --no-huge run without using hugepages 00:07:53.223 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:53.223 -e, --tpoint-group [:] 00:07:53.223 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:07:53.223 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:53.223 [2024-11-17 08:58:30.140118] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:07:53.483 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:53.483 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:53.483 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:53.483 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:53.483 [--------- DD Options ---------] 00:07:53.483 --if Input file. Must specify either --if or --ib. 00:07:53.483 --ib Input bdev. Must specifier either --if or --ib 00:07:53.483 --of Output file. Must specify either --of or --ob. 00:07:53.483 --ob Output bdev. Must specify either --of or --ob. 00:07:53.483 --iflag Input file flags. 00:07:53.483 --oflag Output file flags. 00:07:53.483 --bs I/O unit size (default: 4096) 00:07:53.483 --qd Queue depth (default: 2) 00:07:53.483 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:53.483 --skip Skip this many I/O units at start of input. (default: 0) 00:07:53.483 --seek Skip this many I/O units at start of output. (default: 0) 00:07:53.483 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:53.483 --sparse Enable hole skipping in input target 00:07:53.483 Available iflag and oflag values: 00:07:53.483 append - append mode 00:07:53.483 direct - use direct I/O for data 00:07:53.483 directory - fail unless a directory 00:07:53.483 dsync - use synchronized I/O for data 00:07:53.483 noatime - do not update access time 00:07:53.483 noctty - do not assign controlling terminal from file 00:07:53.483 nofollow - do not follow symlinks 00:07:53.483 nonblock - use non-blocking I/O 00:07:53.483 sync - use synchronized I/O for data and metadata 00:07:53.483 08:58:30 -- common/autotest_common.sh@653 -- # es=2 00:07:53.483 08:58:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.483 08:58:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.483 08:58:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.483 00:07:53.483 real 0m0.070s 00:07:53.483 user 0m0.042s 00:07:53.483 sys 0m0.027s 00:07:53.483 08:58:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.483 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.483 ************************************ 00:07:53.483 END TEST dd_invalid_arguments 00:07:53.483 ************************************ 00:07:53.483 08:58:30 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:53.483 08:58:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.484 08:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.484 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.484 ************************************ 00:07:53.484 START TEST dd_double_input 00:07:53.484 ************************************ 00:07:53.484 08:58:30 -- common/autotest_common.sh@1114 -- # double_input 00:07:53.484 08:58:30 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:53.484 08:58:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:53.484 08:58:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:53.484 08:58:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.484 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.484 08:58:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.484 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.484 08:58:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.484 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.484 08:58:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.484 08:58:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.484 08:58:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:53.484 [2024-11-17 08:58:30.263759] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
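From here on, negative_dd.sh only checks that spdk_dd refuses invalid option combinations: each run_test wraps the call in NOT, so a test passes when spdk_dd exits non-zero, and the *ERROR* lines above show why each invocation was rejected (unknown option, both --if and --ib given, both --of and --ob given, missing input or output, bad --bs or --count, and so on). A rough standalone sketch of the first two failure cases, outside the NOT/run_test wrappers — err.txt is a scratch file introduced only for this example, the grep lines are an extra illustrative check on the message text rather than something the harness itself does, and the messages are assumed to land on the combined stdout/stderr stream as they do in the trace:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# dd_invalid_arguments: an unknown option makes spdk_dd print its usage and fail
$SPDK_DD --ii= --ob= > err.txt 2>&1 && echo "unexpectedly succeeded"
grep -q "unrecognized option '--ii='" err.txt

# dd_double_input: --if and --ib are mutually exclusive
$SPDK_DD --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= > err.txt 2>&1 \
    && echo "unexpectedly succeeded"
grep -q "You may specify either --if or --ib, but not both." err.txt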
00:07:53.484 08:58:30 -- common/autotest_common.sh@653 -- # es=22 00:07:53.484 08:58:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.484 08:58:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.484 08:58:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.484 00:07:53.484 real 0m0.071s 00:07:53.484 user 0m0.044s 00:07:53.484 sys 0m0.026s 00:07:53.484 08:58:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.484 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.484 ************************************ 00:07:53.484 END TEST dd_double_input 00:07:53.484 ************************************ 00:07:53.484 08:58:30 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:53.484 08:58:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.484 08:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.484 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.484 ************************************ 00:07:53.484 START TEST dd_double_output 00:07:53.484 ************************************ 00:07:53.484 08:58:30 -- common/autotest_common.sh@1114 -- # double_output 00:07:53.484 08:58:30 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:53.484 08:58:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:53.484 08:58:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:53.484 08:58:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.484 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.484 08:58:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.484 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.484 08:58:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.484 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.484 08:58:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.484 08:58:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.484 08:58:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:53.484 [2024-11-17 08:58:30.386702] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:07:53.484 08:58:30 -- common/autotest_common.sh@653 -- # es=22 00:07:53.484 08:58:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.484 08:58:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.484 08:58:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.484 00:07:53.484 real 0m0.072s 00:07:53.484 user 0m0.045s 00:07:53.484 sys 0m0.026s 00:07:53.484 08:58:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.484 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.484 ************************************ 00:07:53.484 END TEST dd_double_output 00:07:53.484 ************************************ 00:07:53.744 08:58:30 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:53.744 08:58:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.744 08:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.744 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.744 ************************************ 00:07:53.744 START TEST dd_no_input 00:07:53.744 ************************************ 00:07:53.744 08:58:30 -- common/autotest_common.sh@1114 -- # no_input 00:07:53.744 08:58:30 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:53.744 08:58:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:53.744 08:58:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:53.744 08:58:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.744 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.744 08:58:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.744 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.744 08:58:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.744 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.744 08:58:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.744 08:58:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.744 08:58:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:53.744 [2024-11-17 08:58:30.509891] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:07:53.744 08:58:30 -- common/autotest_common.sh@653 -- # es=22 00:07:53.744 08:58:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.744 08:58:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.744 08:58:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.744 00:07:53.744 real 0m0.074s 00:07:53.744 user 0m0.048s 00:07:53.744 sys 0m0.025s 00:07:53.744 08:58:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.744 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.744 ************************************ 00:07:53.744 END TEST dd_no_input 00:07:53.744 ************************************ 00:07:53.744 08:58:30 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:53.744 08:58:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.744 08:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.744 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.744 ************************************ 
00:07:53.744 START TEST dd_no_output 00:07:53.744 ************************************ 00:07:53.744 08:58:30 -- common/autotest_common.sh@1114 -- # no_output 00:07:53.744 08:58:30 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.744 08:58:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:53.744 08:58:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.744 08:58:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.744 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.744 08:58:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.744 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.744 08:58:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.744 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.744 08:58:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.744 08:58:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.744 08:58:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.744 [2024-11-17 08:58:30.637862] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:07:53.744 08:58:30 -- common/autotest_common.sh@653 -- # es=22 00:07:53.744 08:58:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.744 08:58:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.744 08:58:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.744 00:07:53.744 real 0m0.075s 00:07:53.744 user 0m0.043s 00:07:53.744 sys 0m0.030s 00:07:53.744 08:58:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.744 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.744 ************************************ 00:07:53.744 END TEST dd_no_output 00:07:53.744 ************************************ 00:07:54.004 08:58:30 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:54.004 08:58:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.004 08:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.004 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:54.004 ************************************ 00:07:54.004 START TEST dd_wrong_blocksize 00:07:54.004 ************************************ 00:07:54.004 08:58:30 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:07:54.004 08:58:30 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:54.004 08:58:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:54.005 08:58:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:54.005 08:58:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.005 08:58:30 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:07:54.005 08:58:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.005 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.005 08:58:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.005 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.005 08:58:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.005 08:58:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:54.005 08:58:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:54.005 [2024-11-17 08:58:30.761108] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:07:54.005 ************************************ 00:07:54.005 END TEST dd_wrong_blocksize 00:07:54.005 ************************************ 00:07:54.005 08:58:30 -- common/autotest_common.sh@653 -- # es=22 00:07:54.005 08:58:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.005 08:58:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.005 08:58:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.005 00:07:54.005 real 0m0.074s 00:07:54.005 user 0m0.045s 00:07:54.005 sys 0m0.028s 00:07:54.005 08:58:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.005 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:54.005 08:58:30 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:54.005 08:58:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.005 08:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.005 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:54.005 ************************************ 00:07:54.005 START TEST dd_smaller_blocksize 00:07:54.005 ************************************ 00:07:54.005 08:58:30 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:07:54.005 08:58:30 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:54.005 08:58:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:54.005 08:58:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:54.005 08:58:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.005 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.005 08:58:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.005 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.005 08:58:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.005 08:58:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.005 08:58:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.005 08:58:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:07:54.005 08:58:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:54.005 [2024-11-17 08:58:30.892024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:54.005 [2024-11-17 08:58:30.892137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59654 ] 00:07:54.264 [2024-11-17 08:58:31.033591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.264 [2024-11-17 08:58:31.104780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.524 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:54.524 [2024-11-17 08:58:31.430525] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:54.524 [2024-11-17 08:58:31.430656] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:54.784 [2024-11-17 08:58:31.496957] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:54.784 08:58:31 -- common/autotest_common.sh@653 -- # es=244 00:07:54.784 08:58:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.784 08:58:31 -- common/autotest_common.sh@662 -- # es=116 00:07:54.784 08:58:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:54.784 08:58:31 -- common/autotest_common.sh@670 -- # es=1 00:07:54.784 08:58:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.784 00:07:54.784 real 0m0.771s 00:07:54.784 user 0m0.357s 00:07:54.784 sys 0m0.308s 00:07:54.784 08:58:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.784 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:54.784 ************************************ 00:07:54.784 END TEST dd_smaller_blocksize 00:07:54.784 ************************************ 00:07:54.784 08:58:31 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:54.784 08:58:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.784 08:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.784 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:54.784 ************************************ 00:07:54.784 START TEST dd_invalid_count 00:07:54.784 ************************************ 00:07:54.784 08:58:31 -- common/autotest_common.sh@1114 -- # invalid_count 00:07:54.784 08:58:31 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:54.784 08:58:31 -- common/autotest_common.sh@650 -- # local es=0 00:07:54.784 08:58:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:54.784 08:58:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.784 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.784 08:58:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.784 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.784 08:58:31 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.784 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.784 08:58:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.784 08:58:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:54.784 08:58:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:54.784 [2024-11-17 08:58:31.707884] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:07:55.044 08:58:31 -- common/autotest_common.sh@653 -- # es=22 00:07:55.044 08:58:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.044 08:58:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.044 08:58:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.044 00:07:55.044 real 0m0.076s 00:07:55.044 user 0m0.042s 00:07:55.044 sys 0m0.033s 00:07:55.044 08:58:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.044 ************************************ 00:07:55.044 END TEST dd_invalid_count 00:07:55.044 ************************************ 00:07:55.044 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:55.044 08:58:31 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:55.044 08:58:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.044 08:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.044 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:55.044 ************************************ 00:07:55.044 START TEST dd_invalid_oflag 00:07:55.044 ************************************ 00:07:55.044 08:58:31 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:07:55.044 08:58:31 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:55.044 08:58:31 -- common/autotest_common.sh@650 -- # local es=0 00:07:55.044 08:58:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:55.044 08:58:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.044 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.044 08:58:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.044 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.044 08:58:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.044 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.044 08:58:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.044 08:58:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.044 08:58:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:55.044 [2024-11-17 08:58:31.838046] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:07:55.044 08:58:31 -- common/autotest_common.sh@653 -- # es=22 00:07:55.044 08:58:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.044 08:58:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.044 
08:58:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.044 00:07:55.044 real 0m0.077s 00:07:55.044 user 0m0.052s 00:07:55.044 sys 0m0.023s 00:07:55.044 08:58:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.044 ************************************ 00:07:55.044 END TEST dd_invalid_oflag 00:07:55.044 ************************************ 00:07:55.044 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:55.044 08:58:31 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:55.044 08:58:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.044 08:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.044 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:55.044 ************************************ 00:07:55.044 START TEST dd_invalid_iflag 00:07:55.044 ************************************ 00:07:55.044 08:58:31 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:07:55.044 08:58:31 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:55.044 08:58:31 -- common/autotest_common.sh@650 -- # local es=0 00:07:55.044 08:58:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:55.044 08:58:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.044 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.044 08:58:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.044 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.044 08:58:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.044 08:58:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.044 08:58:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.044 08:58:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.044 08:58:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:55.044 [2024-11-17 08:58:31.960568] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:07:55.304 08:58:31 -- common/autotest_common.sh@653 -- # es=22 00:07:55.304 08:58:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.304 08:58:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.304 08:58:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.304 00:07:55.304 real 0m0.070s 00:07:55.304 user 0m0.043s 00:07:55.304 sys 0m0.027s 00:07:55.304 08:58:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.304 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:55.304 ************************************ 00:07:55.304 END TEST dd_invalid_iflag 00:07:55.304 ************************************ 00:07:55.304 08:58:32 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:55.304 08:58:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.304 08:58:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.304 08:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.304 ************************************ 00:07:55.304 START TEST dd_unknown_flag 00:07:55.304 ************************************ 00:07:55.304 08:58:32 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:07:55.304 08:58:32 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:55.304 08:58:32 -- common/autotest_common.sh@650 -- # local es=0 00:07:55.304 08:58:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:55.304 08:58:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.304 08:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.304 08:58:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.304 08:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.304 08:58:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.304 08:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.304 08:58:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.304 08:58:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.304 08:58:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:55.304 [2024-11-17 08:58:32.086581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.304 [2024-11-17 08:58:32.086691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59746 ] 00:07:55.304 [2024-11-17 08:58:32.222958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.564 [2024-11-17 08:58:32.274851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.564 [2024-11-17 08:58:32.321175] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:07:55.564 [2024-11-17 08:58:32.321265] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:55.564 [2024-11-17 08:58:32.321284] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:55.564 [2024-11-17 08:58:32.321300] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.564 [2024-11-17 08:58:32.386583] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:55.823 08:58:32 -- common/autotest_common.sh@653 -- # es=236 00:07:55.823 08:58:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.823 08:58:32 -- common/autotest_common.sh@662 -- # es=108 00:07:55.823 08:58:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:55.823 08:58:32 -- common/autotest_common.sh@670 -- # es=1 00:07:55.823 08:58:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.823 00:07:55.823 real 0m0.464s 00:07:55.823 user 0m0.262s 00:07:55.823 sys 0m0.098s 00:07:55.823 08:58:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.823 08:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.823 ************************************ 00:07:55.823 END 
TEST dd_unknown_flag 00:07:55.823 ************************************ 00:07:55.823 08:58:32 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:55.823 08:58:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.823 08:58:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.823 08:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.823 ************************************ 00:07:55.823 START TEST dd_invalid_json 00:07:55.823 ************************************ 00:07:55.823 08:58:32 -- common/autotest_common.sh@1114 -- # invalid_json 00:07:55.823 08:58:32 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:55.823 08:58:32 -- common/autotest_common.sh@650 -- # local es=0 00:07:55.823 08:58:32 -- dd/negative_dd.sh@95 -- # : 00:07:55.823 08:58:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:55.823 08:58:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.824 08:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.824 08:58:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.824 08:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.824 08:58:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.824 08:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.824 08:58:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.824 08:58:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.824 08:58:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:55.824 [2024-11-17 08:58:32.598776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:55.824 [2024-11-17 08:58:32.598892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ] 00:07:55.824 [2024-11-17 08:58:32.737468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.083 [2024-11-17 08:58:32.787431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.083 [2024-11-17 08:58:32.787592] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:07:56.083 [2024-11-17 08:58:32.787637] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.083 [2024-11-17 08:58:32.787704] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:56.083 08:58:32 -- common/autotest_common.sh@653 -- # es=234 00:07:56.083 08:58:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:56.083 08:58:32 -- common/autotest_common.sh@662 -- # es=106 00:07:56.083 08:58:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:56.083 08:58:32 -- common/autotest_common.sh@670 -- # es=1 00:07:56.083 08:58:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:56.083 00:07:56.083 real 0m0.336s 00:07:56.083 user 0m0.170s 00:07:56.083 sys 0m0.064s 00:07:56.083 08:58:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.083 08:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:56.083 ************************************ 00:07:56.083 END TEST dd_invalid_json 00:07:56.083 ************************************ 00:07:56.083 00:07:56.083 real 0m3.029s 00:07:56.083 user 0m1.490s 00:07:56.083 sys 0m1.186s 00:07:56.083 08:58:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.083 08:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:56.083 ************************************ 00:07:56.083 END TEST spdk_dd_negative 00:07:56.083 ************************************ 00:07:56.083 00:07:56.083 real 1m6.711s 00:07:56.083 user 0m41.275s 00:07:56.083 sys 0m16.181s 00:07:56.083 08:58:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.083 ************************************ 00:07:56.083 END TEST spdk_dd 00:07:56.083 ************************************ 00:07:56.083 08:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:56.083 08:58:32 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:56.083 08:58:32 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:56.083 08:58:32 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:56.083 08:58:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.083 08:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.344 08:58:33 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:56.344 08:58:33 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:56.344 08:58:33 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:56.344 08:58:33 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:56.344 08:58:33 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:56.344 08:58:33 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:56.344 08:58:33 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:56.344 08:58:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.344 08:58:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.344 08:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.344 ************************************ 00:07:56.344 START TEST 
nvmf_tcp 00:07:56.344 ************************************ 00:07:56.344 08:58:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:56.344 * Looking for test storage... 00:07:56.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:56.344 08:58:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:56.344 08:58:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:56.344 08:58:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:56.344 08:58:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:56.344 08:58:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:56.344 08:58:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:56.344 08:58:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:56.344 08:58:33 -- scripts/common.sh@335 -- # IFS=.-: 00:07:56.344 08:58:33 -- scripts/common.sh@335 -- # read -ra ver1 00:07:56.344 08:58:33 -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.344 08:58:33 -- scripts/common.sh@336 -- # read -ra ver2 00:07:56.344 08:58:33 -- scripts/common.sh@337 -- # local 'op=<' 00:07:56.344 08:58:33 -- scripts/common.sh@339 -- # ver1_l=2 00:07:56.344 08:58:33 -- scripts/common.sh@340 -- # ver2_l=1 00:07:56.344 08:58:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:56.344 08:58:33 -- scripts/common.sh@343 -- # case "$op" in 00:07:56.344 08:58:33 -- scripts/common.sh@344 -- # : 1 00:07:56.344 08:58:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:56.344 08:58:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.344 08:58:33 -- scripts/common.sh@364 -- # decimal 1 00:07:56.344 08:58:33 -- scripts/common.sh@352 -- # local d=1 00:07:56.344 08:58:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.344 08:58:33 -- scripts/common.sh@354 -- # echo 1 00:07:56.344 08:58:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:56.344 08:58:33 -- scripts/common.sh@365 -- # decimal 2 00:07:56.344 08:58:33 -- scripts/common.sh@352 -- # local d=2 00:07:56.344 08:58:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.344 08:58:33 -- scripts/common.sh@354 -- # echo 2 00:07:56.344 08:58:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:56.344 08:58:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:56.344 08:58:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:56.344 08:58:33 -- scripts/common.sh@367 -- # return 0 00:07:56.344 08:58:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.344 08:58:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:56.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.344 --rc genhtml_branch_coverage=1 00:07:56.344 --rc genhtml_function_coverage=1 00:07:56.344 --rc genhtml_legend=1 00:07:56.344 --rc geninfo_all_blocks=1 00:07:56.344 --rc geninfo_unexecuted_blocks=1 00:07:56.344 00:07:56.344 ' 00:07:56.344 08:58:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:56.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.344 --rc genhtml_branch_coverage=1 00:07:56.344 --rc genhtml_function_coverage=1 00:07:56.344 --rc genhtml_legend=1 00:07:56.344 --rc geninfo_all_blocks=1 00:07:56.344 --rc geninfo_unexecuted_blocks=1 00:07:56.344 00:07:56.344 ' 00:07:56.344 08:58:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:56.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.344 --rc 
genhtml_branch_coverage=1 00:07:56.344 --rc genhtml_function_coverage=1 00:07:56.344 --rc genhtml_legend=1 00:07:56.344 --rc geninfo_all_blocks=1 00:07:56.344 --rc geninfo_unexecuted_blocks=1 00:07:56.344 00:07:56.344 ' 00:07:56.344 08:58:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:56.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.344 --rc genhtml_branch_coverage=1 00:07:56.344 --rc genhtml_function_coverage=1 00:07:56.344 --rc genhtml_legend=1 00:07:56.344 --rc geninfo_all_blocks=1 00:07:56.344 --rc geninfo_unexecuted_blocks=1 00:07:56.344 00:07:56.344 ' 00:07:56.344 08:58:33 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:56.344 08:58:33 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:56.344 08:58:33 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.344 08:58:33 -- nvmf/common.sh@7 -- # uname -s 00:07:56.344 08:58:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.344 08:58:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.344 08:58:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.344 08:58:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.344 08:58:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.344 08:58:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.344 08:58:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.344 08:58:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.344 08:58:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.344 08:58:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.344 08:58:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:07:56.344 08:58:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:07:56.344 08:58:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.344 08:58:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.344 08:58:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.344 08:58:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.344 08:58:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.344 08:58:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.344 08:58:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.344 08:58:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.344 08:58:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.344 08:58:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.344 08:58:33 -- paths/export.sh@5 -- # export PATH 00:07:56.344 08:58:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.344 08:58:33 -- nvmf/common.sh@46 -- # : 0 00:07:56.344 08:58:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:56.344 08:58:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:56.344 08:58:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:56.344 08:58:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.344 08:58:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.344 08:58:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:56.344 08:58:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:56.344 08:58:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:56.344 08:58:33 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:56.344 08:58:33 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:56.344 08:58:33 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:56.344 08:58:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.344 08:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.344 08:58:33 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:56.344 08:58:33 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:56.344 08:58:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.344 08:58:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.344 08:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.344 ************************************ 00:07:56.344 START TEST nvmf_host_management 00:07:56.344 ************************************ 00:07:56.344 08:58:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:56.604 * Looking for test storage... 
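Before each suite runs, the lt/cmp_versions xtrace above (and repeated below for host_management.sh) decides whether the installed lcov is new enough for coverage flags. A simplified sketch of that comparison follows; it mirrors the traced steps but is a reimplementation for readability, not the scripts/common.sh source, and only covers numeric fields.

```bash
# Simplified sketch of the lt/cmp_versions logic traced above (not the
# scripts/common.sh source). Versions are split on '.', '-' or ':' and the
# fields are compared numerically, left to right.
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && { [[ $op == ">" || $op == ">=" ]]; return; }
        ((d1 < d2)) && { [[ $op == "<" || $op == "<=" ]]; return; }
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]
}

lt 1.15 2 && echo "1.15 < 2"   # matches the trace: first field 1 < 2, so lcov 1.15 is treated as old
```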
00:07:56.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:56.604 08:58:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:56.604 08:58:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:56.604 08:58:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:56.604 08:58:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:56.604 08:58:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:56.604 08:58:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:56.604 08:58:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:56.604 08:58:33 -- scripts/common.sh@335 -- # IFS=.-: 00:07:56.604 08:58:33 -- scripts/common.sh@335 -- # read -ra ver1 00:07:56.604 08:58:33 -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.604 08:58:33 -- scripts/common.sh@336 -- # read -ra ver2 00:07:56.604 08:58:33 -- scripts/common.sh@337 -- # local 'op=<' 00:07:56.604 08:58:33 -- scripts/common.sh@339 -- # ver1_l=2 00:07:56.604 08:58:33 -- scripts/common.sh@340 -- # ver2_l=1 00:07:56.604 08:58:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:56.604 08:58:33 -- scripts/common.sh@343 -- # case "$op" in 00:07:56.604 08:58:33 -- scripts/common.sh@344 -- # : 1 00:07:56.604 08:58:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:56.604 08:58:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.604 08:58:33 -- scripts/common.sh@364 -- # decimal 1 00:07:56.604 08:58:33 -- scripts/common.sh@352 -- # local d=1 00:07:56.604 08:58:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.604 08:58:33 -- scripts/common.sh@354 -- # echo 1 00:07:56.604 08:58:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:56.604 08:58:33 -- scripts/common.sh@365 -- # decimal 2 00:07:56.604 08:58:33 -- scripts/common.sh@352 -- # local d=2 00:07:56.604 08:58:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.604 08:58:33 -- scripts/common.sh@354 -- # echo 2 00:07:56.604 08:58:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:56.604 08:58:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:56.604 08:58:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:56.604 08:58:33 -- scripts/common.sh@367 -- # return 0 00:07:56.604 08:58:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.604 08:58:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:56.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.604 --rc genhtml_branch_coverage=1 00:07:56.604 --rc genhtml_function_coverage=1 00:07:56.604 --rc genhtml_legend=1 00:07:56.604 --rc geninfo_all_blocks=1 00:07:56.604 --rc geninfo_unexecuted_blocks=1 00:07:56.604 00:07:56.604 ' 00:07:56.604 08:58:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:56.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.604 --rc genhtml_branch_coverage=1 00:07:56.604 --rc genhtml_function_coverage=1 00:07:56.604 --rc genhtml_legend=1 00:07:56.604 --rc geninfo_all_blocks=1 00:07:56.604 --rc geninfo_unexecuted_blocks=1 00:07:56.604 00:07:56.604 ' 00:07:56.604 08:58:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:56.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.604 --rc genhtml_branch_coverage=1 00:07:56.604 --rc genhtml_function_coverage=1 00:07:56.604 --rc genhtml_legend=1 00:07:56.604 --rc geninfo_all_blocks=1 00:07:56.604 --rc geninfo_unexecuted_blocks=1 00:07:56.604 00:07:56.604 ' 00:07:56.604 
08:58:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:56.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.604 --rc genhtml_branch_coverage=1 00:07:56.604 --rc genhtml_function_coverage=1 00:07:56.604 --rc genhtml_legend=1 00:07:56.604 --rc geninfo_all_blocks=1 00:07:56.604 --rc geninfo_unexecuted_blocks=1 00:07:56.604 00:07:56.604 ' 00:07:56.604 08:58:33 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.604 08:58:33 -- nvmf/common.sh@7 -- # uname -s 00:07:56.604 08:58:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.604 08:58:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.604 08:58:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.604 08:58:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.604 08:58:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.604 08:58:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.604 08:58:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.604 08:58:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.604 08:58:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.604 08:58:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.604 08:58:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:07:56.604 08:58:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:07:56.604 08:58:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.604 08:58:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.604 08:58:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.604 08:58:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.604 08:58:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.604 08:58:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.604 08:58:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.604 08:58:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.604 08:58:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.604 08:58:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.604 08:58:33 -- paths/export.sh@5 -- # export PATH 00:07:56.604 08:58:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.604 08:58:33 -- nvmf/common.sh@46 -- # : 0 00:07:56.604 08:58:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:56.604 08:58:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:56.604 08:58:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:56.604 08:58:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.604 08:58:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.604 08:58:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:56.604 08:58:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:56.604 08:58:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:56.604 08:58:33 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.604 08:58:33 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.604 08:58:33 -- target/host_management.sh@104 -- # nvmftestinit 00:07:56.604 08:58:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:56.604 08:58:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.604 08:58:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:56.604 08:58:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:56.604 08:58:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:56.604 08:58:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.604 08:58:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.604 08:58:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.604 08:58:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:56.605 08:58:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:56.605 08:58:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:56.605 08:58:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:56.605 08:58:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:56.605 08:58:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:56.605 08:58:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.605 08:58:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.605 08:58:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:56.605 08:58:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:56.605 08:58:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:56.605 08:58:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:56.605 08:58:33 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:56.605 08:58:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.605 08:58:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:56.605 08:58:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:56.605 08:58:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:56.605 08:58:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:56.605 08:58:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:56.605 Cannot find device "nvmf_init_br" 00:07:56.605 08:58:33 -- nvmf/common.sh@153 -- # true 00:07:56.605 08:58:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:56.605 Cannot find device "nvmf_tgt_br" 00:07:56.605 08:58:33 -- nvmf/common.sh@154 -- # true 00:07:56.605 08:58:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.605 Cannot find device "nvmf_tgt_br2" 00:07:56.605 08:58:33 -- nvmf/common.sh@155 -- # true 00:07:56.605 08:58:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:56.605 Cannot find device "nvmf_init_br" 00:07:56.605 08:58:33 -- nvmf/common.sh@156 -- # true 00:07:56.605 08:58:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:56.605 Cannot find device "nvmf_tgt_br" 00:07:56.605 08:58:33 -- nvmf/common.sh@157 -- # true 00:07:56.605 08:58:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:56.864 Cannot find device "nvmf_tgt_br2" 00:07:56.864 08:58:33 -- nvmf/common.sh@158 -- # true 00:07:56.864 08:58:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:56.864 Cannot find device "nvmf_br" 00:07:56.864 08:58:33 -- nvmf/common.sh@159 -- # true 00:07:56.864 08:58:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:56.864 Cannot find device "nvmf_init_if" 00:07:56.864 08:58:33 -- nvmf/common.sh@160 -- # true 00:07:56.864 08:58:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.864 08:58:33 -- nvmf/common.sh@161 -- # true 00:07:56.864 08:58:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.864 08:58:33 -- nvmf/common.sh@162 -- # true 00:07:56.864 08:58:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:56.864 08:58:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:56.864 08:58:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:56.864 08:58:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:56.864 08:58:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:56.864 08:58:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:56.864 08:58:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:56.864 08:58:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:56.864 08:58:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:56.864 08:58:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:56.864 08:58:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:56.864 08:58:33 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:56.864 08:58:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:56.864 08:58:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:56.865 08:58:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:56.865 08:58:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:56.865 08:58:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:56.865 08:58:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:56.865 08:58:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:56.865 08:58:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:56.865 08:58:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:56.865 08:58:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:57.125 08:58:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:57.125 08:58:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:57.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:07:57.125 00:07:57.125 --- 10.0.0.2 ping statistics --- 00:07:57.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.125 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:07:57.125 08:58:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:57.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:57.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:07:57.125 00:07:57.125 --- 10.0.0.3 ping statistics --- 00:07:57.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.125 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:57.125 08:58:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:57.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:07:57.125 00:07:57.125 --- 10.0.0.1 ping statistics --- 00:07:57.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.125 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:57.125 08:58:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.125 08:58:33 -- nvmf/common.sh@421 -- # return 0 00:07:57.125 08:58:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:57.125 08:58:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.125 08:58:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:57.125 08:58:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:57.125 08:58:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.125 08:58:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:57.125 08:58:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:57.125 08:58:33 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:07:57.125 08:58:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.125 08:58:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.125 08:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:57.125 ************************************ 00:07:57.125 START TEST nvmf_host_management 00:07:57.125 ************************************ 00:07:57.125 08:58:33 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:07:57.125 08:58:33 -- target/host_management.sh@69 -- # starttarget 00:07:57.125 08:58:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:57.125 08:58:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:57.125 08:58:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.125 08:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:57.125 08:58:33 -- nvmf/common.sh@469 -- # nvmfpid=60052 00:07:57.125 08:58:33 -- nvmf/common.sh@470 -- # waitforlisten 60052 00:07:57.125 08:58:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:57.125 08:58:33 -- common/autotest_common.sh@829 -- # '[' -z 60052 ']' 00:07:57.125 08:58:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.125 08:58:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.125 08:58:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.125 08:58:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.125 08:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:57.125 [2024-11-17 08:58:33.927725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:57.125 [2024-11-17 08:58:33.927825] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.384 [2024-11-17 08:58:34.070921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.384 [2024-11-17 08:58:34.145956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:57.384 [2024-11-17 08:58:34.146347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
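Stripped of the xtrace noise, the nvmf_veth_init sequence that just ran boils down to the recap below (same namespace, interface names and addresses as in the trace; the individual `ip link set ... up` calls are elided).

```bash
# Recap of the veth/netns topology built by nvmf_veth_init in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target, first port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target, second port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br    # bridge the host-side peer ends together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the netns)
# only confirm this topology is reachable before nvmf_tgt is started in the namespace.
```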
00:07:57.384 [2024-11-17 08:58:34.146485] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.384 [2024-11-17 08:58:34.146657] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.384 [2024-11-17 08:58:34.147000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.385 [2024-11-17 08:58:34.147241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.385 [2024-11-17 08:58:34.147146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:57.385 [2024-11-17 08:58:34.147084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.323 08:58:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.323 08:58:34 -- common/autotest_common.sh@862 -- # return 0 00:07:58.323 08:58:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:58.323 08:58:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.323 08:58:34 -- common/autotest_common.sh@10 -- # set +x 00:07:58.323 08:58:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.323 08:58:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:58.323 08:58:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.323 08:58:34 -- common/autotest_common.sh@10 -- # set +x 00:07:58.323 [2024-11-17 08:58:34.956928] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.323 08:58:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.323 08:58:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:58.323 08:58:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.323 08:58:34 -- common/autotest_common.sh@10 -- # set +x 00:07:58.323 08:58:34 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:58.323 08:58:34 -- target/host_management.sh@23 -- # cat 00:07:58.323 08:58:34 -- target/host_management.sh@30 -- # rpc_cmd 00:07:58.323 08:58:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.323 08:58:34 -- common/autotest_common.sh@10 -- # set +x 00:07:58.323 Malloc0 00:07:58.323 [2024-11-17 08:58:35.033636] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.323 08:58:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.323 08:58:35 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:58.323 08:58:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.323 08:58:35 -- common/autotest_common.sh@10 -- # set +x 00:07:58.323 08:58:35 -- target/host_management.sh@73 -- # perfpid=60106 00:07:58.323 08:58:35 -- target/host_management.sh@74 -- # waitforlisten 60106 /var/tmp/bdevperf.sock 00:07:58.323 08:58:35 -- common/autotest_common.sh@829 -- # '[' -z 60106 ']' 00:07:58.323 08:58:35 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:58.323 08:58:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.323 08:58:35 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:58.323 08:58:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.323 08:58:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:07:58.323 08:58:35 -- nvmf/common.sh@520 -- # config=() 00:07:58.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:58.323 08:58:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.323 08:58:35 -- nvmf/common.sh@520 -- # local subsystem config 00:07:58.323 08:58:35 -- common/autotest_common.sh@10 -- # set +x 00:07:58.323 08:58:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:58.323 08:58:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:58.323 { 00:07:58.323 "params": { 00:07:58.323 "name": "Nvme$subsystem", 00:07:58.323 "trtype": "$TEST_TRANSPORT", 00:07:58.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:58.323 "adrfam": "ipv4", 00:07:58.323 "trsvcid": "$NVMF_PORT", 00:07:58.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:58.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:58.323 "hdgst": ${hdgst:-false}, 00:07:58.323 "ddgst": ${ddgst:-false} 00:07:58.323 }, 00:07:58.323 "method": "bdev_nvme_attach_controller" 00:07:58.323 } 00:07:58.323 EOF 00:07:58.323 )") 00:07:58.323 08:58:35 -- nvmf/common.sh@542 -- # cat 00:07:58.323 08:58:35 -- nvmf/common.sh@544 -- # jq . 00:07:58.323 08:58:35 -- nvmf/common.sh@545 -- # IFS=, 00:07:58.323 08:58:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:58.323 "params": { 00:07:58.323 "name": "Nvme0", 00:07:58.323 "trtype": "tcp", 00:07:58.323 "traddr": "10.0.0.2", 00:07:58.323 "adrfam": "ipv4", 00:07:58.323 "trsvcid": "4420", 00:07:58.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:58.323 "hdgst": false, 00:07:58.323 "ddgst": false 00:07:58.323 }, 00:07:58.323 "method": "bdev_nvme_attach_controller" 00:07:58.323 }' 00:07:58.323 [2024-11-17 08:58:35.144242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.323 [2024-11-17 08:58:35.144361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60106 ] 00:07:58.583 [2024-11-17 08:58:35.288071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.583 [2024-11-17 08:58:35.358085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.583 Running I/O for 10 seconds... 
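The bdevperf config assembled by gen_nvmf_target_json above is printed as a single escaped line; reflowed it is easier to read. A sketch of that payload follows: the file path is illustrative only, and the outer "subsystems"/"bdev" wrapper that gen_nvmf_target_json normally adds around this entry is assumed, since the trace shows only the entry itself.

```bash
# Pretty-printed form of the bdev_nvme_attach_controller entry piped into
# bdevperf via --json /dev/fd/63 in the trace above. The target file name is
# illustrative; the surrounding "subsystems" wrapper is assumed, not shown.
cat <<'EOF' > /tmp/bdevperf_nvme0.json
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
```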
00:07:59.522 08:58:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.522 08:58:36 -- common/autotest_common.sh@862 -- # return 0 00:07:59.522 08:58:36 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:59.522 08:58:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.522 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.522 08:58:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.522 08:58:36 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.522 08:58:36 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:59.522 08:58:36 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:59.522 08:58:36 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:59.522 08:58:36 -- target/host_management.sh@52 -- # local ret=1 00:07:59.522 08:58:36 -- target/host_management.sh@53 -- # local i 00:07:59.522 08:58:36 -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:59.522 08:58:36 -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:59.522 08:58:36 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:59.522 08:58:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.522 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.522 08:58:36 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:59.522 08:58:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.522 08:58:36 -- target/host_management.sh@55 -- # read_io_count=1823 00:07:59.522 08:58:36 -- target/host_management.sh@58 -- # '[' 1823 -ge 100 ']' 00:07:59.522 08:58:36 -- target/host_management.sh@59 -- # ret=0 00:07:59.522 08:58:36 -- target/host_management.sh@60 -- # break 00:07:59.522 08:58:36 -- target/host_management.sh@64 -- # return 0 00:07:59.522 08:58:36 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:59.522 08:58:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.522 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.522 [2024-11-17 08:58:36.194800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194872] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to 
be set 00:07:59.522 [2024-11-17 08:58:36.194922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194938] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.194993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.195000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.195008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.195031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.195038] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.195061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.195069] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.195076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.522 [2024-11-17 08:58:36.195084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195100] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195146] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195231] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195248] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f4d00 is same with the state(5) to be set 00:07:59.523 [2024-11-17 08:58:36.195330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121472 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.195982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.195993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.196002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.196018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.196033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.196052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.196065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.196076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.196085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.196097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.196110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.196121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.196130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.196141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.196150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.196160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.523 [2024-11-17 08:58:36.196174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.523 [2024-11-17 08:58:36.196192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:59.524 [2024-11-17 08:58:36.196317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:59.524 [2024-11-17 08:58:36.196589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:59.524 [2024-11-17 08:58:36.196918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.196978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.196987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.197013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.197033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.197058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.197092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.197115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.197136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 [2024-11-17 08:58:36.197165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.524 
[2024-11-17 08:58:36.197185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.524 [2024-11-17 08:58:36.197198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1501400 is same with the state(5) to be set 00:07:59.524 [2024-11-17 08:58:36.197258] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1501400 was disconnected and freed. reset controller. 00:07:59.524 task offset: 121088 on job bdev=Nvme0n1 fails 00:07:59.524 00:07:59.524 Latency(us) 00:07:59.524 [2024-11-17T08:58:36.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.524 [2024-11-17T08:58:36.454Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:59.524 [2024-11-17T08:58:36.454Z] Job: Nvme0n1 ended in about 0.70 seconds with error 00:07:59.524 Verification LBA range: start 0x0 length 0x400 00:07:59.524 Nvme0n1 : 0.70 2791.86 174.49 91.58 0.00 21817.86 7417.48 29789.09 00:07:59.524 [2024-11-17T08:58:36.454Z] =================================================================================================================== 00:07:59.524 [2024-11-17T08:58:36.454Z] Total : 2791.86 174.49 91.58 0.00 21817.86 7417.48 29789.09 00:07:59.524 [2024-11-17 08:58:36.198491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:59.525 [2024-11-17 08:58:36.200543] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.525 [2024-11-17 08:58:36.200568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527150 (9): Bad file descriptor 00:07:59.525 [2024-11-17 08:58:36.204110] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:59.525 [2024-11-17 08:58:36.204195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:59.525 [2024-11-17 08:58:36.204221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:59.525 [2024-11-17 08:58:36.204236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:59.525 [2024-11-17 08:58:36.204246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:59.525 [2024-11-17 08:58:36.204255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:59.525 [2024-11-17 08:58:36.204263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1527150 00:07:59.525 [2024-11-17 08:58:36.204296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527150 (9): Bad file descriptor 00:07:59.525 [2024-11-17 08:58:36.204315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:59.525 [2024-11-17 08:58:36.204323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:59.525 [2024-11-17 08:58:36.204333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:07:59.525 [2024-11-17 08:58:36.204349] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:59.525 08:58:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.525 08:58:36 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:59.525 08:58:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.525 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.525 08:58:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.525 08:58:36 -- target/host_management.sh@87 -- # sleep 1 00:08:00.463 08:58:37 -- target/host_management.sh@91 -- # kill -9 60106 00:08:00.463 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (60106) - No such process 00:08:00.463 08:58:37 -- target/host_management.sh@91 -- # true 00:08:00.463 08:58:37 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:00.463 08:58:37 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:00.463 08:58:37 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:00.463 08:58:37 -- nvmf/common.sh@520 -- # config=() 00:08:00.463 08:58:37 -- nvmf/common.sh@520 -- # local subsystem config 00:08:00.463 08:58:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:00.463 08:58:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:00.463 { 00:08:00.463 "params": { 00:08:00.463 "name": "Nvme$subsystem", 00:08:00.463 "trtype": "$TEST_TRANSPORT", 00:08:00.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.463 "adrfam": "ipv4", 00:08:00.463 "trsvcid": "$NVMF_PORT", 00:08:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.463 "hdgst": ${hdgst:-false}, 00:08:00.463 "ddgst": ${ddgst:-false} 00:08:00.463 }, 00:08:00.463 "method": "bdev_nvme_attach_controller" 00:08:00.463 } 00:08:00.463 EOF 00:08:00.463 )") 00:08:00.463 08:58:37 -- nvmf/common.sh@542 -- # cat 00:08:00.463 08:58:37 -- nvmf/common.sh@544 -- # jq . 00:08:00.463 08:58:37 -- nvmf/common.sh@545 -- # IFS=, 00:08:00.463 08:58:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:00.463 "params": { 00:08:00.463 "name": "Nvme0", 00:08:00.463 "trtype": "tcp", 00:08:00.463 "traddr": "10.0.0.2", 00:08:00.463 "adrfam": "ipv4", 00:08:00.463 "trsvcid": "4420", 00:08:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:00.463 "hdgst": false, 00:08:00.463 "ddgst": false 00:08:00.463 }, 00:08:00.463 "method": "bdev_nvme_attach_controller" 00:08:00.463 }' 00:08:00.463 [2024-11-17 08:58:37.270227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.463 [2024-11-17 08:58:37.270319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60144 ] 00:08:00.722 [2024-11-17 08:58:37.401989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.722 [2024-11-17 08:58:37.459285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.722 Running I/O for 1 seconds... 
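(Reference sketch, not part of the captured log.) The bdevperf run launched above gets its NVMe-oF bdev configuration from gen_nvmf_target_json and reads it over /dev/fd/62. A hand-written equivalent is sketched below: the inner attach-controller params are copied from the printf output in the log, while the outer "subsystems"/"bdev" wrapper and the temporary file path are assumptions added here for illustration.

# Illustrative only: write the attach-controller fragment shown above into a plain
# JSON config file and hand it to bdevperf with the same workload knobs.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same parameters as the run above: queue depth 64, 64 KiB I/O, verify workload, 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1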
00:08:02.103 00:08:02.103 Latency(us) 00:08:02.103 [2024-11-17T08:58:39.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.103 [2024-11-17T08:58:39.033Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:02.103 Verification LBA range: start 0x0 length 0x400 00:08:02.103 Nvme0n1 : 1.01 3010.97 188.19 0.00 0.00 20930.76 1295.83 26214.40 00:08:02.103 [2024-11-17T08:58:39.033Z] =================================================================================================================== 00:08:02.103 [2024-11-17T08:58:39.033Z] Total : 3010.97 188.19 0.00 0.00 20930.76 1295.83 26214.40 00:08:02.103 08:58:38 -- target/host_management.sh@101 -- # stoptarget 00:08:02.103 08:58:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:02.103 08:58:38 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:02.103 08:58:38 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:02.103 08:58:38 -- target/host_management.sh@40 -- # nvmftestfini 00:08:02.103 08:58:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:02.103 08:58:38 -- nvmf/common.sh@116 -- # sync 00:08:02.103 08:58:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:02.103 08:58:38 -- nvmf/common.sh@119 -- # set +e 00:08:02.103 08:58:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:02.103 08:58:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:02.103 rmmod nvme_tcp 00:08:02.103 rmmod nvme_fabrics 00:08:02.103 rmmod nvme_keyring 00:08:02.103 08:58:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:02.103 08:58:38 -- nvmf/common.sh@123 -- # set -e 00:08:02.103 08:58:38 -- nvmf/common.sh@124 -- # return 0 00:08:02.103 08:58:38 -- nvmf/common.sh@477 -- # '[' -n 60052 ']' 00:08:02.103 08:58:38 -- nvmf/common.sh@478 -- # killprocess 60052 00:08:02.103 08:58:38 -- common/autotest_common.sh@936 -- # '[' -z 60052 ']' 00:08:02.103 08:58:38 -- common/autotest_common.sh@940 -- # kill -0 60052 00:08:02.103 08:58:38 -- common/autotest_common.sh@941 -- # uname 00:08:02.103 08:58:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:02.103 08:58:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60052 00:08:02.103 killing process with pid 60052 00:08:02.103 08:58:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:02.103 08:58:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:02.103 08:58:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60052' 00:08:02.103 08:58:38 -- common/autotest_common.sh@955 -- # kill 60052 00:08:02.103 08:58:38 -- common/autotest_common.sh@960 -- # wait 60052 00:08:02.362 [2024-11-17 08:58:39.133051] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:02.362 08:58:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:02.362 08:58:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:02.362 08:58:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:02.362 08:58:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.362 08:58:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:02.362 08:58:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.362 08:58:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.362 08:58:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.362 08:58:39 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:02.362 00:08:02.362 real 0m5.337s 00:08:02.362 user 0m22.501s 00:08:02.362 sys 0m1.150s 00:08:02.362 08:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.362 08:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 ************************************ 00:08:02.362 END TEST nvmf_host_management 00:08:02.362 ************************************ 00:08:02.362 08:58:39 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:08:02.362 00:08:02.362 real 0m5.980s 00:08:02.362 user 0m22.706s 00:08:02.362 sys 0m1.398s 00:08:02.362 08:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.362 08:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:02.362 ************************************ 00:08:02.362 END TEST nvmf_host_management 00:08:02.362 ************************************ 00:08:02.362 08:58:39 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:02.362 08:58:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.362 08:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.362 08:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:02.621 ************************************ 00:08:02.621 START TEST nvmf_lvol 00:08:02.621 ************************************ 00:08:02.621 08:58:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:02.621 * Looking for test storage... 00:08:02.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:02.621 08:58:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:02.621 08:58:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:02.622 08:58:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:02.622 08:58:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:02.622 08:58:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:02.622 08:58:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:02.622 08:58:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:02.622 08:58:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:02.622 08:58:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:02.622 08:58:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.622 08:58:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:02.622 08:58:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:02.622 08:58:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:02.622 08:58:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:02.622 08:58:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:02.622 08:58:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:02.622 08:58:39 -- scripts/common.sh@344 -- # : 1 00:08:02.622 08:58:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:02.622 08:58:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.622 08:58:39 -- scripts/common.sh@364 -- # decimal 1 00:08:02.622 08:58:39 -- scripts/common.sh@352 -- # local d=1 00:08:02.622 08:58:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.622 08:58:39 -- scripts/common.sh@354 -- # echo 1 00:08:02.622 08:58:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:02.622 08:58:39 -- scripts/common.sh@365 -- # decimal 2 00:08:02.622 08:58:39 -- scripts/common.sh@352 -- # local d=2 00:08:02.622 08:58:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.622 08:58:39 -- scripts/common.sh@354 -- # echo 2 00:08:02.622 08:58:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:02.622 08:58:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:02.622 08:58:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:02.622 08:58:39 -- scripts/common.sh@367 -- # return 0 00:08:02.622 08:58:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.622 08:58:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:02.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.622 --rc genhtml_branch_coverage=1 00:08:02.622 --rc genhtml_function_coverage=1 00:08:02.622 --rc genhtml_legend=1 00:08:02.622 --rc geninfo_all_blocks=1 00:08:02.622 --rc geninfo_unexecuted_blocks=1 00:08:02.622 00:08:02.622 ' 00:08:02.622 08:58:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:02.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.622 --rc genhtml_branch_coverage=1 00:08:02.622 --rc genhtml_function_coverage=1 00:08:02.622 --rc genhtml_legend=1 00:08:02.622 --rc geninfo_all_blocks=1 00:08:02.622 --rc geninfo_unexecuted_blocks=1 00:08:02.622 00:08:02.622 ' 00:08:02.622 08:58:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:02.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.622 --rc genhtml_branch_coverage=1 00:08:02.622 --rc genhtml_function_coverage=1 00:08:02.622 --rc genhtml_legend=1 00:08:02.622 --rc geninfo_all_blocks=1 00:08:02.622 --rc geninfo_unexecuted_blocks=1 00:08:02.622 00:08:02.622 ' 00:08:02.622 08:58:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:02.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.622 --rc genhtml_branch_coverage=1 00:08:02.622 --rc genhtml_function_coverage=1 00:08:02.622 --rc genhtml_legend=1 00:08:02.622 --rc geninfo_all_blocks=1 00:08:02.622 --rc geninfo_unexecuted_blocks=1 00:08:02.622 00:08:02.622 ' 00:08:02.622 08:58:39 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:02.622 08:58:39 -- nvmf/common.sh@7 -- # uname -s 00:08:02.622 08:58:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.622 08:58:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.622 08:58:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.622 08:58:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.622 08:58:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.622 08:58:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.622 08:58:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.622 08:58:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.622 08:58:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.622 08:58:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.622 08:58:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:08:02.622 
08:58:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:08:02.622 08:58:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.622 08:58:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.622 08:58:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:02.622 08:58:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.622 08:58:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.622 08:58:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.622 08:58:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.622 08:58:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.622 08:58:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.622 08:58:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.622 08:58:39 -- paths/export.sh@5 -- # export PATH 00:08:02.622 08:58:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.622 08:58:39 -- nvmf/common.sh@46 -- # : 0 00:08:02.622 08:58:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:02.622 08:58:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:02.622 08:58:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:02.622 08:58:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.622 08:58:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.622 08:58:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:08:02.622 08:58:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:02.622 08:58:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:02.622 08:58:39 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.622 08:58:39 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.622 08:58:39 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:02.622 08:58:39 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:02.622 08:58:39 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.622 08:58:39 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:02.622 08:58:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:02.622 08:58:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.622 08:58:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:02.622 08:58:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:02.622 08:58:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:02.622 08:58:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.622 08:58:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.622 08:58:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.622 08:58:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:02.622 08:58:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:02.622 08:58:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:02.622 08:58:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:02.622 08:58:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:02.622 08:58:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:02.622 08:58:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.622 08:58:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.622 08:58:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:02.622 08:58:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:02.622 08:58:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:02.622 08:58:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:02.622 08:58:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:02.622 08:58:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.622 08:58:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:02.622 08:58:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:02.622 08:58:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:02.622 08:58:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:02.622 08:58:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:02.622 08:58:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:02.622 Cannot find device "nvmf_tgt_br" 00:08:02.622 08:58:39 -- nvmf/common.sh@154 -- # true 00:08:02.622 08:58:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.622 Cannot find device "nvmf_tgt_br2" 00:08:02.622 08:58:39 -- nvmf/common.sh@155 -- # true 00:08:02.622 08:58:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:02.622 08:58:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:02.880 Cannot find device "nvmf_tgt_br" 00:08:02.880 08:58:39 -- nvmf/common.sh@157 -- # true 00:08:02.880 08:58:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:02.880 Cannot find device "nvmf_tgt_br2" 00:08:02.880 08:58:39 -- nvmf/common.sh@158 -- # true 00:08:02.880 08:58:39 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:08:02.880 08:58:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:02.880 08:58:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:02.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.880 08:58:39 -- nvmf/common.sh@161 -- # true 00:08:02.880 08:58:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:02.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.880 08:58:39 -- nvmf/common.sh@162 -- # true 00:08:02.880 08:58:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:02.880 08:58:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:02.880 08:58:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:02.880 08:58:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:02.880 08:58:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:02.880 08:58:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:02.880 08:58:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:02.880 08:58:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:02.880 08:58:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:02.880 08:58:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:02.880 08:58:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:02.880 08:58:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:02.880 08:58:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:02.880 08:58:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:02.880 08:58:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:02.880 08:58:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:02.880 08:58:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:02.880 08:58:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:02.880 08:58:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:02.880 08:58:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:02.880 08:58:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:02.880 08:58:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:03.140 08:58:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:03.140 08:58:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:03.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:08:03.140 00:08:03.140 --- 10.0.0.2 ping statistics --- 00:08:03.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.140 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:03.140 08:58:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:03.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:03.140 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:03.140 00:08:03.140 --- 10.0.0.3 ping statistics --- 00:08:03.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.140 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:03.140 08:58:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:03.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:03.140 00:08:03.140 --- 10.0.0.1 ping statistics --- 00:08:03.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.140 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:03.140 08:58:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.140 08:58:39 -- nvmf/common.sh@421 -- # return 0 00:08:03.140 08:58:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:03.140 08:58:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.140 08:58:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:03.140 08:58:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:03.140 08:58:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.140 08:58:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:03.140 08:58:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:03.140 08:58:39 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:03.140 08:58:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:03.140 08:58:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.140 08:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:03.140 08:58:39 -- nvmf/common.sh@469 -- # nvmfpid=60379 00:08:03.140 08:58:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:03.140 08:58:39 -- nvmf/common.sh@470 -- # waitforlisten 60379 00:08:03.140 08:58:39 -- common/autotest_common.sh@829 -- # '[' -z 60379 ']' 00:08:03.140 08:58:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.140 08:58:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.140 08:58:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.140 08:58:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.140 08:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:03.140 [2024-11-17 08:58:39.914401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:03.140 [2024-11-17 08:58:39.914497] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.140 [2024-11-17 08:58:40.051422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.399 [2024-11-17 08:58:40.105106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.399 [2024-11-17 08:58:40.105260] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.399 [2024-11-17 08:58:40.105272] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
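(Reference sketch, not part of the captured log.) The nvmf_veth_init bring-up traced above condenses to the commands below, taken directly from the log: the target side lives in the nvmf_tgt_ns_spdk namespace, the initiator stays in the default namespace, a bridge ties the veth peers together, and iptables admits NVMe/TCP traffic on port 4420 before the ping reachability checks.

# Condensed sketch of the test topology (run as root); all names and addresses are the
# ones used by the test above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3      # initiator -> target reachability, as logged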
00:08:03.399 [2024-11-17 08:58:40.105280] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.399 [2024-11-17 08:58:40.105424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.399 [2024-11-17 08:58:40.105892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.399 [2024-11-17 08:58:40.105922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.333 08:58:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.333 08:58:40 -- common/autotest_common.sh@862 -- # return 0 00:08:04.333 08:58:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:04.333 08:58:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.333 08:58:40 -- common/autotest_common.sh@10 -- # set +x 00:08:04.333 08:58:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.333 08:58:40 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:04.333 [2024-11-17 08:58:41.202809] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.333 08:58:41 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:04.592 08:58:41 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:04.592 08:58:41 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:04.851 08:58:41 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:04.851 08:58:41 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:05.418 08:58:42 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:05.418 08:58:42 -- target/nvmf_lvol.sh@29 -- # lvs=cadf2c1d-e744-4f60-aab7-ed85058b0c83 00:08:05.677 08:58:42 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cadf2c1d-e744-4f60-aab7-ed85058b0c83 lvol 20 00:08:05.936 08:58:42 -- target/nvmf_lvol.sh@32 -- # lvol=4d2c3a6d-95c0-4df9-8180-4fd71b0c0977 00:08:05.936 08:58:42 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.936 08:58:42 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4d2c3a6d-95c0-4df9-8180-4fd71b0c0977 00:08:06.195 08:58:43 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:06.453 [2024-11-17 08:58:43.309468] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.453 08:58:43 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.712 08:58:43 -- target/nvmf_lvol.sh@42 -- # perf_pid=60460 00:08:06.712 08:58:43 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:06.712 08:58:43 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:08.089 08:58:44 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4d2c3a6d-95c0-4df9-8180-4fd71b0c0977 MY_SNAPSHOT 
00:08:08.089 08:58:44 -- target/nvmf_lvol.sh@47 -- # snapshot=cba62b46-73f2-41ab-b94b-c44d1668289f 00:08:08.089 08:58:44 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4d2c3a6d-95c0-4df9-8180-4fd71b0c0977 30 00:08:08.348 08:58:45 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone cba62b46-73f2-41ab-b94b-c44d1668289f MY_CLONE 00:08:08.606 08:58:45 -- target/nvmf_lvol.sh@49 -- # clone=b92388f3-4665-446b-b09c-c9f914f877c2 00:08:08.606 08:58:45 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b92388f3-4665-446b-b09c-c9f914f877c2 00:08:09.173 08:58:45 -- target/nvmf_lvol.sh@53 -- # wait 60460 00:08:17.289 Initializing NVMe Controllers 00:08:17.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:17.289 Controller IO queue size 128, less than required. 00:08:17.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:17.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:17.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:17.289 Initialization complete. Launching workers. 00:08:17.289 ======================================================== 00:08:17.289 Latency(us) 00:08:17.289 Device Information : IOPS MiB/s Average min max 00:08:17.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9294.84 36.31 13785.09 1791.23 59328.70 00:08:17.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9193.86 35.91 13930.46 2395.31 60857.83 00:08:17.289 ======================================================== 00:08:17.289 Total : 18488.70 72.22 13857.38 1791.23 60857.83 00:08:17.289 00:08:17.289 08:58:53 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.289 08:58:54 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4d2c3a6d-95c0-4df9-8180-4fd71b0c0977 00:08:17.548 08:58:54 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cadf2c1d-e744-4f60-aab7-ed85058b0c83 00:08:17.847 08:58:54 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:17.848 08:58:54 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:17.848 08:58:54 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:17.848 08:58:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:17.848 08:58:54 -- nvmf/common.sh@116 -- # sync 00:08:17.848 08:58:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:17.848 08:58:54 -- nvmf/common.sh@119 -- # set +e 00:08:17.848 08:58:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:17.848 08:58:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:17.848 rmmod nvme_tcp 00:08:17.848 rmmod nvme_fabrics 00:08:17.848 rmmod nvme_keyring 00:08:18.135 08:58:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:18.135 08:58:54 -- nvmf/common.sh@123 -- # set -e 00:08:18.135 08:58:54 -- nvmf/common.sh@124 -- # return 0 00:08:18.135 08:58:54 -- nvmf/common.sh@477 -- # '[' -n 60379 ']' 00:08:18.135 08:58:54 -- nvmf/common.sh@478 -- # killprocess 60379 00:08:18.135 08:58:54 -- common/autotest_common.sh@936 -- # '[' -z 60379 ']' 00:08:18.135 08:58:54 -- common/autotest_common.sh@940 -- # kill -0 60379 00:08:18.135 08:58:54 -- common/autotest_common.sh@941 -- # uname 00:08:18.135 
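(Reference sketch, not part of the captured log.) The rpc.py sequence the nvmf_lvol test drives above (raid0 over two malloc bdevs, an lvstore on top, a 20 MB lvol exported over NVMe/TCP, then snapshot/resize/clone/inflate while spdk_nvme_perf runs, followed by teardown) condenses to the sketch below. The calls and sizes are lifted from the log; $rpc, $lvs, $lvol, $snap and $clone are illustrative shell shorthand for the values the test captures.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                   # -> Malloc0
$rpc bdev_malloc_create 64 512                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)   # lvstore UUID, as logged
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)  # lvol at LVOL_BDEV_INIT_SIZE=20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# snapshot the origin, grow it to LVOL_BDEV_FINAL_SIZE=30, clone the snapshot, inflate
# the clone -- all while spdk_nvme_perf drives randwrite I/O, as in the run above
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
# teardown, mirroring the end of the test
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"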
08:58:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:18.135 08:58:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60379 00:08:18.135 08:58:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:18.135 killing process with pid 60379 00:08:18.135 08:58:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:18.135 08:58:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60379' 00:08:18.135 08:58:54 -- common/autotest_common.sh@955 -- # kill 60379 00:08:18.135 08:58:54 -- common/autotest_common.sh@960 -- # wait 60379 00:08:18.135 08:58:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:18.135 08:58:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:18.135 08:58:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:18.135 08:58:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.135 08:58:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:18.135 08:58:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.135 08:58:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.135 08:58:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.394 08:58:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:18.394 00:08:18.394 real 0m15.778s 00:08:18.394 user 1m5.226s 00:08:18.394 sys 0m4.567s 00:08:18.394 08:58:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.394 08:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:18.394 ************************************ 00:08:18.394 END TEST nvmf_lvol 00:08:18.394 ************************************ 00:08:18.394 08:58:55 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:18.394 08:58:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:18.394 08:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.394 08:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:18.394 ************************************ 00:08:18.394 START TEST nvmf_lvs_grow 00:08:18.394 ************************************ 00:08:18.394 08:58:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:18.394 * Looking for test storage... 
00:08:18.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.394 08:58:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:18.394 08:58:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:18.394 08:58:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:18.394 08:58:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:18.394 08:58:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:18.394 08:58:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:18.394 08:58:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:18.394 08:58:55 -- scripts/common.sh@335 -- # IFS=.-: 00:08:18.394 08:58:55 -- scripts/common.sh@335 -- # read -ra ver1 00:08:18.394 08:58:55 -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.394 08:58:55 -- scripts/common.sh@336 -- # read -ra ver2 00:08:18.394 08:58:55 -- scripts/common.sh@337 -- # local 'op=<' 00:08:18.394 08:58:55 -- scripts/common.sh@339 -- # ver1_l=2 00:08:18.394 08:58:55 -- scripts/common.sh@340 -- # ver2_l=1 00:08:18.394 08:58:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:18.394 08:58:55 -- scripts/common.sh@343 -- # case "$op" in 00:08:18.394 08:58:55 -- scripts/common.sh@344 -- # : 1 00:08:18.394 08:58:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:18.394 08:58:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.394 08:58:55 -- scripts/common.sh@364 -- # decimal 1 00:08:18.394 08:58:55 -- scripts/common.sh@352 -- # local d=1 00:08:18.394 08:58:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.394 08:58:55 -- scripts/common.sh@354 -- # echo 1 00:08:18.394 08:58:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:18.394 08:58:55 -- scripts/common.sh@365 -- # decimal 2 00:08:18.394 08:58:55 -- scripts/common.sh@352 -- # local d=2 00:08:18.394 08:58:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.394 08:58:55 -- scripts/common.sh@354 -- # echo 2 00:08:18.394 08:58:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:18.394 08:58:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:18.394 08:58:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:18.394 08:58:55 -- scripts/common.sh@367 -- # return 0 00:08:18.395 08:58:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.395 08:58:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:18.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.395 --rc genhtml_branch_coverage=1 00:08:18.395 --rc genhtml_function_coverage=1 00:08:18.395 --rc genhtml_legend=1 00:08:18.395 --rc geninfo_all_blocks=1 00:08:18.395 --rc geninfo_unexecuted_blocks=1 00:08:18.395 00:08:18.395 ' 00:08:18.395 08:58:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:18.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.395 --rc genhtml_branch_coverage=1 00:08:18.395 --rc genhtml_function_coverage=1 00:08:18.395 --rc genhtml_legend=1 00:08:18.395 --rc geninfo_all_blocks=1 00:08:18.395 --rc geninfo_unexecuted_blocks=1 00:08:18.395 00:08:18.395 ' 00:08:18.395 08:58:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:18.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.395 --rc genhtml_branch_coverage=1 00:08:18.395 --rc genhtml_function_coverage=1 00:08:18.395 --rc genhtml_legend=1 00:08:18.395 --rc geninfo_all_blocks=1 00:08:18.395 --rc geninfo_unexecuted_blocks=1 00:08:18.395 00:08:18.395 ' 00:08:18.395 
08:58:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:18.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.395 --rc genhtml_branch_coverage=1 00:08:18.395 --rc genhtml_function_coverage=1 00:08:18.395 --rc genhtml_legend=1 00:08:18.395 --rc geninfo_all_blocks=1 00:08:18.395 --rc geninfo_unexecuted_blocks=1 00:08:18.395 00:08:18.395 ' 00:08:18.395 08:58:55 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.395 08:58:55 -- nvmf/common.sh@7 -- # uname -s 00:08:18.395 08:58:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.395 08:58:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.395 08:58:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.395 08:58:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.395 08:58:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.395 08:58:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.395 08:58:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.395 08:58:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.395 08:58:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.395 08:58:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.395 08:58:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:08:18.395 08:58:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:08:18.395 08:58:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.395 08:58:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.395 08:58:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:18.395 08:58:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.395 08:58:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.395 08:58:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.395 08:58:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.395 08:58:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.395 08:58:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.395 08:58:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.395 08:58:55 -- paths/export.sh@5 -- # export PATH 00:08:18.395 08:58:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.395 08:58:55 -- nvmf/common.sh@46 -- # : 0 00:08:18.395 08:58:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:18.395 08:58:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:18.395 08:58:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:18.395 08:58:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.395 08:58:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.395 08:58:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:18.395 08:58:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:18.395 08:58:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:18.654 08:58:55 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.654 08:58:55 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:18.654 08:58:55 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:18.654 08:58:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:18.654 08:58:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.654 08:58:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:18.654 08:58:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:18.654 08:58:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:18.654 08:58:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.654 08:58:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.654 08:58:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.654 08:58:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:18.654 08:58:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:18.654 08:58:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:18.654 08:58:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:18.654 08:58:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:18.654 08:58:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:18.654 08:58:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.654 08:58:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.654 08:58:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:18.654 08:58:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:18.654 08:58:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:18.654 08:58:55 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:18.654 08:58:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:18.654 08:58:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.654 08:58:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:18.654 08:58:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:18.654 08:58:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:18.654 08:58:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:18.654 08:58:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:18.655 08:58:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:18.655 Cannot find device "nvmf_tgt_br" 00:08:18.655 08:58:55 -- nvmf/common.sh@154 -- # true 00:08:18.655 08:58:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.655 Cannot find device "nvmf_tgt_br2" 00:08:18.655 08:58:55 -- nvmf/common.sh@155 -- # true 00:08:18.655 08:58:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:18.655 08:58:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:18.655 Cannot find device "nvmf_tgt_br" 00:08:18.655 08:58:55 -- nvmf/common.sh@157 -- # true 00:08:18.655 08:58:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:18.655 Cannot find device "nvmf_tgt_br2" 00:08:18.655 08:58:55 -- nvmf/common.sh@158 -- # true 00:08:18.655 08:58:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:18.655 08:58:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:18.655 08:58:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.655 08:58:55 -- nvmf/common.sh@161 -- # true 00:08:18.655 08:58:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.655 08:58:55 -- nvmf/common.sh@162 -- # true 00:08:18.655 08:58:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:18.655 08:58:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:18.655 08:58:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:18.655 08:58:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:18.655 08:58:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:18.655 08:58:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:18.655 08:58:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:18.655 08:58:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:18.655 08:58:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:18.655 08:58:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:18.655 08:58:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:18.655 08:58:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:18.655 08:58:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:18.655 08:58:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:18.655 08:58:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
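[Editor's note] The nvmf_veth_init trace above builds the virtual test network for NET_TYPE=virt: one network namespace (nvmf_tgt_ns_spdk) holds the target-side interfaces, the initiator side stays in the root namespace, and everything lives on 10.0.0.0/24. A condensed sketch of the same steps, using the interface names and addresses from the trace:

    # namespace plus three veth pairs: initiator, target port 1, target port 2
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring both sides up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

The "Cannot find device" / "Cannot open network namespace" messages just above are expected: the teardown of any previous topology runs unconditionally before the setup.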
00:08:18.915 08:58:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:18.915 08:58:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:18.915 08:58:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:18.915 08:58:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:18.915 08:58:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:18.915 08:58:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:18.915 08:58:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:18.915 08:58:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:18.915 08:58:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:18.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:08:18.915 00:08:18.915 --- 10.0.0.2 ping statistics --- 00:08:18.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.915 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:18.915 08:58:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:18.915 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:18.915 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:18.915 00:08:18.915 --- 10.0.0.3 ping statistics --- 00:08:18.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.915 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:18.915 08:58:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:18.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:18.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:08:18.915 00:08:18.915 --- 10.0.0.1 ping statistics --- 00:08:18.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.915 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:18.915 08:58:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.915 08:58:55 -- nvmf/common.sh@421 -- # return 0 00:08:18.915 08:58:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:18.915 08:58:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.915 08:58:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:18.915 08:58:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:18.915 08:58:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.915 08:58:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:18.915 08:58:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:18.915 08:58:55 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:18.915 08:58:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:18.915 08:58:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.915 08:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:18.915 08:58:55 -- nvmf/common.sh@469 -- # nvmfpid=60785 00:08:18.915 08:58:55 -- nvmf/common.sh@470 -- # waitforlisten 60785 00:08:18.915 08:58:55 -- common/autotest_common.sh@829 -- # '[' -z 60785 ']' 00:08:18.915 08:58:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:18.915 08:58:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
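[Editor's note] With the veth pairs in place, the script joins the bridge-side peers into nvmf_br, opens TCP port 4420 on the initiator interface, and ping-checks 10.0.0.2, 10.0.0.3 and 10.0.0.1 before loading nvme-tcp and starting the target inside the namespace. Condensed from the trace, with the binary path and flags as logged:

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                  # target port 1
    ping -c 1 10.0.0.3                                  # target port 2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # back to the initiator
    modprobe nvme-tcp

    # nvmf_tgt runs inside the namespace on core 0 (-m 0x1); RPCs still go
    # through /var/tmp/spdk.sock, which waitforlisten polls below
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &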
00:08:18.915 08:58:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.915 08:58:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.915 08:58:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.915 08:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:18.915 [2024-11-17 08:58:55.742736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:18.915 [2024-11-17 08:58:55.742833] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.174 [2024-11-17 08:58:55.883085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.174 [2024-11-17 08:58:55.938016] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.174 [2024-11-17 08:58:55.938385] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.174 [2024-11-17 08:58:55.938409] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.174 [2024-11-17 08:58:55.938418] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.174 [2024-11-17 08:58:55.938456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.112 08:58:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.112 08:58:56 -- common/autotest_common.sh@862 -- # return 0 00:08:20.112 08:58:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:20.112 08:58:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.112 08:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:20.112 08:58:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.112 08:58:56 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:20.371 [2024-11-17 08:58:57.070190] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:20.371 08:58:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:20.371 08:58:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.371 08:58:57 -- common/autotest_common.sh@10 -- # set +x 00:08:20.371 ************************************ 00:08:20.371 START TEST lvs_grow_clean 00:08:20.371 ************************************ 00:08:20.371 08:58:57 -- common/autotest_common.sh@1114 -- # lvs_grow 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
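[Editor's note] Everything the lvs_grow tests exercise sits on that 200 MiB file. The next commands in the trace wrap it in an AIO bdev with a 4 KiB block size, build an lvstore with 4 MiB clusters on top, and carve out a 150 MiB lvol (rpc.py below abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs          # reports 49 data clusters
    rpc.py bdev_lvol_create -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f lvol 150   # lvstore uuid from this run

200 MiB at 4 MiB per cluster is 50 clusters; the remainder goes to lvstore metadata, which is why the test asserts data_clusters == 49. The 150 MiB lvol is rounded up to a whole number of clusters (38, i.e. 152 MiB), matching the 38912 four-KiB blocks reported for the bdev later in the trace.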
00:08:20.371 08:58:57 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:20.630 08:58:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:20.630 08:58:57 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:20.889 08:58:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:20.889 08:58:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:20.889 08:58:57 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:21.148 08:58:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:21.148 08:58:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:21.148 08:58:57 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f lvol 150 00:08:21.407 08:58:58 -- target/nvmf_lvs_grow.sh@33 -- # lvol=94baa792-5851-4cbd-a28b-b5a17695ae94 00:08:21.407 08:58:58 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:21.407 08:58:58 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:21.665 [2024-11-17 08:58:58.388405] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:21.665 [2024-11-17 08:58:58.388505] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:21.665 true 00:08:21.665 08:58:58 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:21.665 08:58:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:21.923 08:58:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:21.923 08:58:58 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.181 08:58:58 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 94baa792-5851-4cbd-a28b-b5a17695ae94 00:08:22.438 08:58:59 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:22.696 [2024-11-17 08:58:59.397050] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.696 08:58:59 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
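[Editor's note] At this point the lvol is exported: subsystem nqn.2016-06.io.spdk:cnode0 carries it as a namespace and listens on 10.0.0.2:4420 over TCP, with a discovery listener on the same address. The initiator side, started next in the trace, is the bdevperf example app with its own RPC socket; condensed, the wiring is:

    # bdevperf started idle (-z) on core 1: 4 KiB random writes, queue depth 128, 10 s runs
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # attach the exported namespace over NVMe/TCP; it shows up as bdev Nvme0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

bdevperf stays idle until bdevperf.py perform_tests is invoked over the same socket, which is what kicks off the "Running I/O for 10 seconds..." samples below.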
00:08:22.955 08:58:59 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60873 00:08:22.955 08:58:59 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:22.955 08:58:59 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.955 08:58:59 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60873 /var/tmp/bdevperf.sock 00:08:22.955 08:58:59 -- common/autotest_common.sh@829 -- # '[' -z 60873 ']' 00:08:22.955 08:58:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:22.955 08:58:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.955 08:58:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:22.955 08:58:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.955 08:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:22.955 [2024-11-17 08:58:59.689408] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:22.955 [2024-11-17 08:58:59.689968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60873 ] 00:08:22.955 [2024-11-17 08:58:59.829139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.214 [2024-11-17 08:58:59.885666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.781 08:59:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.781 08:59:00 -- common/autotest_common.sh@862 -- # return 0 00:08:23.781 08:59:00 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:24.041 Nvme0n1 00:08:24.041 08:59:00 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:24.299 [ 00:08:24.299 { 00:08:24.299 "name": "Nvme0n1", 00:08:24.299 "aliases": [ 00:08:24.299 "94baa792-5851-4cbd-a28b-b5a17695ae94" 00:08:24.299 ], 00:08:24.299 "product_name": "NVMe disk", 00:08:24.299 "block_size": 4096, 00:08:24.299 "num_blocks": 38912, 00:08:24.299 "uuid": "94baa792-5851-4cbd-a28b-b5a17695ae94", 00:08:24.299 "assigned_rate_limits": { 00:08:24.299 "rw_ios_per_sec": 0, 00:08:24.299 "rw_mbytes_per_sec": 0, 00:08:24.299 "r_mbytes_per_sec": 0, 00:08:24.299 "w_mbytes_per_sec": 0 00:08:24.299 }, 00:08:24.299 "claimed": false, 00:08:24.299 "zoned": false, 00:08:24.299 "supported_io_types": { 00:08:24.299 "read": true, 00:08:24.299 "write": true, 00:08:24.299 "unmap": true, 00:08:24.299 "write_zeroes": true, 00:08:24.299 "flush": true, 00:08:24.299 "reset": true, 00:08:24.299 "compare": true, 00:08:24.299 "compare_and_write": true, 00:08:24.299 "abort": true, 00:08:24.299 "nvme_admin": true, 00:08:24.299 "nvme_io": true 00:08:24.299 }, 00:08:24.299 "driver_specific": { 00:08:24.299 "nvme": [ 00:08:24.299 { 00:08:24.299 "trid": { 00:08:24.299 "trtype": "TCP", 00:08:24.299 "adrfam": "IPv4", 00:08:24.299 "traddr": "10.0.0.2", 00:08:24.299 "trsvcid": "4420", 00:08:24.299 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:24.299 }, 00:08:24.299 "ctrlr_data": { 00:08:24.299 "cntlid": 1, 00:08:24.299 
"vendor_id": "0x8086", 00:08:24.300 "model_number": "SPDK bdev Controller", 00:08:24.300 "serial_number": "SPDK0", 00:08:24.300 "firmware_revision": "24.01.1", 00:08:24.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.300 "oacs": { 00:08:24.300 "security": 0, 00:08:24.300 "format": 0, 00:08:24.300 "firmware": 0, 00:08:24.300 "ns_manage": 0 00:08:24.300 }, 00:08:24.300 "multi_ctrlr": true, 00:08:24.300 "ana_reporting": false 00:08:24.300 }, 00:08:24.300 "vs": { 00:08:24.300 "nvme_version": "1.3" 00:08:24.300 }, 00:08:24.300 "ns_data": { 00:08:24.300 "id": 1, 00:08:24.300 "can_share": true 00:08:24.300 } 00:08:24.300 } 00:08:24.300 ], 00:08:24.300 "mp_policy": "active_passive" 00:08:24.300 } 00:08:24.300 } 00:08:24.300 ] 00:08:24.300 08:59:01 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60897 00:08:24.300 08:59:01 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:24.300 08:59:01 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:24.300 Running I/O for 10 seconds... 00:08:25.678 Latency(us) 00:08:25.678 [2024-11-17T08:59:02.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.678 [2024-11-17T08:59:02.608Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.678 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:25.678 [2024-11-17T08:59:02.608Z] =================================================================================================================== 00:08:25.678 [2024-11-17T08:59:02.608Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:25.678 00:08:26.245 08:59:03 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:26.505 [2024-11-17T08:59:03.435Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.505 Nvme0n1 : 2.00 6339.50 24.76 0.00 0.00 0.00 0.00 0.00 00:08:26.505 [2024-11-17T08:59:03.435Z] =================================================================================================================== 00:08:26.505 [2024-11-17T08:59:03.435Z] Total : 6339.50 24.76 0.00 0.00 0.00 0.00 0.00 00:08:26.505 00:08:26.505 true 00:08:26.505 08:59:03 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:26.505 08:59:03 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:27.074 08:59:03 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:27.074 08:59:03 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:27.074 08:59:03 -- target/nvmf_lvs_grow.sh@65 -- # wait 60897 00:08:27.334 [2024-11-17T08:59:04.264Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.334 Nvme0n1 : 3.00 6343.00 24.78 0.00 0.00 0.00 0.00 0.00 00:08:27.334 [2024-11-17T08:59:04.264Z] =================================================================================================================== 00:08:27.334 [2024-11-17T08:59:04.264Z] Total : 6343.00 24.78 0.00 0.00 0.00 0.00 0.00 00:08:27.334 00:08:28.277 [2024-11-17T08:59:05.207Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.277 Nvme0n1 : 4.00 6376.50 24.91 0.00 0.00 0.00 0.00 0.00 00:08:28.277 [2024-11-17T08:59:05.207Z] =================================================================================================================== 00:08:28.277 
[2024-11-17T08:59:05.207Z] Total : 6376.50 24.91 0.00 0.00 0.00 0.00 0.00 00:08:28.277 00:08:29.654 [2024-11-17T08:59:06.584Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.654 Nvme0n1 : 5.00 6339.80 24.76 0.00 0.00 0.00 0.00 0.00 00:08:29.654 [2024-11-17T08:59:06.584Z] =================================================================================================================== 00:08:29.654 [2024-11-17T08:59:06.584Z] Total : 6339.80 24.76 0.00 0.00 0.00 0.00 0.00 00:08:29.654 00:08:30.592 [2024-11-17T08:59:07.522Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.592 Nvme0n1 : 6.00 6341.50 24.77 0.00 0.00 0.00 0.00 0.00 00:08:30.592 [2024-11-17T08:59:07.522Z] =================================================================================================================== 00:08:30.592 [2024-11-17T08:59:07.522Z] Total : 6341.50 24.77 0.00 0.00 0.00 0.00 0.00 00:08:30.592 00:08:31.558 [2024-11-17T08:59:08.488Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.558 Nvme0n1 : 7.00 6324.57 24.71 0.00 0.00 0.00 0.00 0.00 00:08:31.558 [2024-11-17T08:59:08.488Z] =================================================================================================================== 00:08:31.558 [2024-11-17T08:59:08.488Z] Total : 6324.57 24.71 0.00 0.00 0.00 0.00 0.00 00:08:31.558 00:08:32.493 [2024-11-17T08:59:09.423Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.493 Nvme0n1 : 8.00 6311.88 24.66 0.00 0.00 0.00 0.00 0.00 00:08:32.493 [2024-11-17T08:59:09.423Z] =================================================================================================================== 00:08:32.493 [2024-11-17T08:59:09.423Z] Total : 6311.88 24.66 0.00 0.00 0.00 0.00 0.00 00:08:32.493 00:08:33.430 [2024-11-17T08:59:10.360Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.430 Nvme0n1 : 9.00 6287.89 24.56 0.00 0.00 0.00 0.00 0.00 00:08:33.430 [2024-11-17T08:59:10.360Z] =================================================================================================================== 00:08:33.430 [2024-11-17T08:59:10.360Z] Total : 6287.89 24.56 0.00 0.00 0.00 0.00 0.00 00:08:33.430 00:08:34.368 [2024-11-17T08:59:11.298Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.368 Nvme0n1 : 10.00 6281.40 24.54 0.00 0.00 0.00 0.00 0.00 00:08:34.368 [2024-11-17T08:59:11.298Z] =================================================================================================================== 00:08:34.368 [2024-11-17T08:59:11.298Z] Total : 6281.40 24.54 0.00 0.00 0.00 0.00 0.00 00:08:34.368 00:08:34.368 00:08:34.368 Latency(us) 00:08:34.368 [2024-11-17T08:59:11.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.368 [2024-11-17T08:59:11.298Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.368 Nvme0n1 : 10.02 6283.83 24.55 0.00 0.00 20363.94 17396.83 84362.71 00:08:34.368 [2024-11-17T08:59:11.298Z] =================================================================================================================== 00:08:34.368 [2024-11-17T08:59:11.298Z] Total : 6283.83 24.55 0.00 0.00 20363.94 17396.83 84362.71 00:08:34.368 0 00:08:34.368 08:59:11 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60873 00:08:34.368 08:59:11 -- common/autotest_common.sh@936 -- # '[' -z 60873 ']' 00:08:34.368 08:59:11 -- common/autotest_common.sh@940 -- 
# kill -0 60873 00:08:34.368 08:59:11 -- common/autotest_common.sh@941 -- # uname 00:08:34.368 08:59:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:34.368 08:59:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60873 00:08:34.368 killing process with pid 60873 00:08:34.368 Received shutdown signal, test time was about 10.000000 seconds 00:08:34.368 00:08:34.368 Latency(us) 00:08:34.368 [2024-11-17T08:59:11.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.368 [2024-11-17T08:59:11.298Z] =================================================================================================================== 00:08:34.368 [2024-11-17T08:59:11.298Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:34.368 08:59:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:34.368 08:59:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:34.368 08:59:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60873' 00:08:34.368 08:59:11 -- common/autotest_common.sh@955 -- # kill 60873 00:08:34.368 08:59:11 -- common/autotest_common.sh@960 -- # wait 60873 00:08:34.627 08:59:11 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:34.886 08:59:11 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:34.886 08:59:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:35.145 08:59:12 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:35.145 08:59:12 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:08:35.145 08:59:12 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:35.404 [2024-11-17 08:59:12.256282] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:35.404 08:59:12 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:35.404 08:59:12 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.404 08:59:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:35.404 08:59:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.404 08:59:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.404 08:59:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.404 08:59:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.404 08:59:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.404 08:59:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.404 08:59:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.404 08:59:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:35.404 08:59:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:35.663 request: 00:08:35.663 { 00:08:35.663 "uuid": "7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f", 00:08:35.663 "method": "bdev_lvol_get_lvstores", 
00:08:35.663 "req_id": 1 00:08:35.663 } 00:08:35.663 Got JSON-RPC error response 00:08:35.663 response: 00:08:35.663 { 00:08:35.663 "code": -19, 00:08:35.663 "message": "No such device" 00:08:35.663 } 00:08:35.663 08:59:12 -- common/autotest_common.sh@653 -- # es=1 00:08:35.663 08:59:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.663 08:59:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.663 08:59:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.663 08:59:12 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:35.922 aio_bdev 00:08:35.922 08:59:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 94baa792-5851-4cbd-a28b-b5a17695ae94 00:08:35.922 08:59:12 -- common/autotest_common.sh@897 -- # local bdev_name=94baa792-5851-4cbd-a28b-b5a17695ae94 00:08:35.922 08:59:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:35.922 08:59:12 -- common/autotest_common.sh@899 -- # local i 00:08:35.922 08:59:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:35.922 08:59:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:35.922 08:59:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:36.181 08:59:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 94baa792-5851-4cbd-a28b-b5a17695ae94 -t 2000 00:08:36.441 [ 00:08:36.441 { 00:08:36.441 "name": "94baa792-5851-4cbd-a28b-b5a17695ae94", 00:08:36.441 "aliases": [ 00:08:36.441 "lvs/lvol" 00:08:36.441 ], 00:08:36.441 "product_name": "Logical Volume", 00:08:36.441 "block_size": 4096, 00:08:36.441 "num_blocks": 38912, 00:08:36.441 "uuid": "94baa792-5851-4cbd-a28b-b5a17695ae94", 00:08:36.441 "assigned_rate_limits": { 00:08:36.441 "rw_ios_per_sec": 0, 00:08:36.441 "rw_mbytes_per_sec": 0, 00:08:36.441 "r_mbytes_per_sec": 0, 00:08:36.441 "w_mbytes_per_sec": 0 00:08:36.441 }, 00:08:36.441 "claimed": false, 00:08:36.441 "zoned": false, 00:08:36.441 "supported_io_types": { 00:08:36.441 "read": true, 00:08:36.441 "write": true, 00:08:36.441 "unmap": true, 00:08:36.441 "write_zeroes": true, 00:08:36.441 "flush": false, 00:08:36.441 "reset": true, 00:08:36.441 "compare": false, 00:08:36.441 "compare_and_write": false, 00:08:36.441 "abort": false, 00:08:36.441 "nvme_admin": false, 00:08:36.441 "nvme_io": false 00:08:36.441 }, 00:08:36.441 "driver_specific": { 00:08:36.441 "lvol": { 00:08:36.441 "lvol_store_uuid": "7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f", 00:08:36.441 "base_bdev": "aio_bdev", 00:08:36.441 "thin_provision": false, 00:08:36.441 "snapshot": false, 00:08:36.441 "clone": false, 00:08:36.441 "esnap_clone": false 00:08:36.441 } 00:08:36.441 } 00:08:36.441 } 00:08:36.441 ] 00:08:36.441 08:59:13 -- common/autotest_common.sh@905 -- # return 0 00:08:36.442 08:59:13 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:36.442 08:59:13 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:36.701 08:59:13 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:36.701 08:59:13 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:36.701 08:59:13 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:36.961 08:59:13 -- target/nvmf_lvs_grow.sh@88 
-- # (( data_clusters == 99 )) 00:08:36.961 08:59:13 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 94baa792-5851-4cbd-a28b-b5a17695ae94 00:08:37.220 08:59:14 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d77b3a4-d60c-4e04-bb88-1ef8ef3e4d9f 00:08:37.478 08:59:14 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:37.738 08:59:14 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:37.997 ************************************ 00:08:37.997 END TEST lvs_grow_clean 00:08:37.997 ************************************ 00:08:37.997 00:08:37.997 real 0m17.749s 00:08:37.997 user 0m16.732s 00:08:37.997 sys 0m2.393s 00:08:37.997 08:59:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.997 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:37.997 08:59:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.997 08:59:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.997 08:59:14 -- common/autotest_common.sh@10 -- # set +x 00:08:37.997 ************************************ 00:08:37.997 START TEST lvs_grow_dirty 00:08:37.997 ************************************ 00:08:37.997 08:59:14 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:37.997 08:59:14 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:38.257 08:59:15 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:38.257 08:59:15 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:38.516 08:59:15 -- target/nvmf_lvs_grow.sh@28 -- # lvs=5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:38.516 08:59:15 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:38.516 08:59:15 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:38.775 08:59:15 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:38.775 08:59:15 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:38.775 08:59:15 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da lvol 150 00:08:39.034 08:59:15 -- target/nvmf_lvs_grow.sh@33 -- # lvol=fb1fdffd-38d6-48db-a59b-aa34d9116256 00:08:39.034 08:59:15 -- target/nvmf_lvs_grow.sh@36 -- # 
truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:39.035 08:59:15 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:39.293 [2024-11-17 08:59:16.137314] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:39.293 [2024-11-17 08:59:16.137420] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:39.293 true 00:08:39.293 08:59:16 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:39.293 08:59:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:39.552 08:59:16 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:39.552 08:59:16 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:39.810 08:59:16 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb1fdffd-38d6-48db-a59b-aa34d9116256 00:08:40.068 08:59:16 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:40.327 08:59:17 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:40.586 08:59:17 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=61142 00:08:40.586 08:59:17 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.586 08:59:17 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:40.586 08:59:17 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 61142 /var/tmp/bdevperf.sock 00:08:40.586 08:59:17 -- common/autotest_common.sh@829 -- # '[' -z 61142 ']' 00:08:40.586 08:59:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:40.586 08:59:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.586 08:59:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:40.586 08:59:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.586 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:08:40.586 [2024-11-17 08:59:17.392883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:40.586 [2024-11-17 08:59:17.392985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61142 ] 00:08:40.845 [2024-11-17 08:59:17.531126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.845 [2024-11-17 08:59:17.580021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.413 08:59:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.413 08:59:18 -- common/autotest_common.sh@862 -- # return 0 00:08:41.413 08:59:18 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:41.673 Nvme0n1 00:08:41.933 08:59:18 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:41.933 [ 00:08:41.933 { 00:08:41.933 "name": "Nvme0n1", 00:08:41.933 "aliases": [ 00:08:41.933 "fb1fdffd-38d6-48db-a59b-aa34d9116256" 00:08:41.933 ], 00:08:41.933 "product_name": "NVMe disk", 00:08:41.933 "block_size": 4096, 00:08:41.933 "num_blocks": 38912, 00:08:41.933 "uuid": "fb1fdffd-38d6-48db-a59b-aa34d9116256", 00:08:41.933 "assigned_rate_limits": { 00:08:41.933 "rw_ios_per_sec": 0, 00:08:41.933 "rw_mbytes_per_sec": 0, 00:08:41.933 "r_mbytes_per_sec": 0, 00:08:41.933 "w_mbytes_per_sec": 0 00:08:41.933 }, 00:08:41.933 "claimed": false, 00:08:41.933 "zoned": false, 00:08:41.933 "supported_io_types": { 00:08:41.933 "read": true, 00:08:41.933 "write": true, 00:08:41.933 "unmap": true, 00:08:41.933 "write_zeroes": true, 00:08:41.933 "flush": true, 00:08:41.933 "reset": true, 00:08:41.933 "compare": true, 00:08:41.933 "compare_and_write": true, 00:08:41.933 "abort": true, 00:08:41.933 "nvme_admin": true, 00:08:41.933 "nvme_io": true 00:08:41.933 }, 00:08:41.933 "driver_specific": { 00:08:41.933 "nvme": [ 00:08:41.933 { 00:08:41.933 "trid": { 00:08:41.933 "trtype": "TCP", 00:08:41.933 "adrfam": "IPv4", 00:08:41.933 "traddr": "10.0.0.2", 00:08:41.933 "trsvcid": "4420", 00:08:41.933 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:41.933 }, 00:08:41.933 "ctrlr_data": { 00:08:41.933 "cntlid": 1, 00:08:41.933 "vendor_id": "0x8086", 00:08:41.933 "model_number": "SPDK bdev Controller", 00:08:41.933 "serial_number": "SPDK0", 00:08:41.933 "firmware_revision": "24.01.1", 00:08:41.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.933 "oacs": { 00:08:41.933 "security": 0, 00:08:41.933 "format": 0, 00:08:41.933 "firmware": 0, 00:08:41.933 "ns_manage": 0 00:08:41.933 }, 00:08:41.933 "multi_ctrlr": true, 00:08:41.933 "ana_reporting": false 00:08:41.934 }, 00:08:41.934 "vs": { 00:08:41.934 "nvme_version": "1.3" 00:08:41.934 }, 00:08:41.934 "ns_data": { 00:08:41.934 "id": 1, 00:08:41.934 "can_share": true 00:08:41.934 } 00:08:41.934 } 00:08:41.934 ], 00:08:41.934 "mp_policy": "active_passive" 00:08:41.934 } 00:08:41.934 } 00:08:41.934 ] 00:08:41.934 08:59:18 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61160 00:08:41.934 08:59:18 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:41.934 08:59:18 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:42.193 Running I/O for 10 seconds... 
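[Editor's note] This is the dirty variant of the same run: while bdevperf keeps 128 random writes in flight against Nvme0n1, the backing file has already been truncated to 400 MiB and rescanned (bdev_aio_rescan aio_bdev), and early in the 10-second run the test grows the lvstore underneath the live workload, then checks that the cluster count moved from 49 to 99 (400 MiB at 4 MiB per cluster, less metadata). Condensed from the next lines of the trace:

    rpc.py bdev_lvol_grow_lvstore -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da
    rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da \
        | jq -r '.[0].total_data_clusters'     # expected: 99

What makes the variant "dirty" comes afterwards: the nvmf target is killed with SIGKILL instead of being shut down, so when a fresh target re-creates the AIO bdev the blobstore has to be recovered, which is what the "Performing recovery on blobstore" notices further down show.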
00:08:43.130 Latency(us) 00:08:43.130 [2024-11-17T08:59:20.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.130 [2024-11-17T08:59:20.060Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.130 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:43.130 [2024-11-17T08:59:20.060Z] =================================================================================================================== 00:08:43.130 [2024-11-17T08:59:20.060Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:43.130 00:08:44.069 08:59:20 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:44.069 [2024-11-17T08:59:20.999Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.069 Nvme0n1 : 2.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:44.069 [2024-11-17T08:59:20.999Z] =================================================================================================================== 00:08:44.069 [2024-11-17T08:59:20.999Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:44.069 00:08:44.328 true 00:08:44.328 08:59:21 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:44.328 08:59:21 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:44.594 08:59:21 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:44.594 08:59:21 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:44.594 08:59:21 -- target/nvmf_lvs_grow.sh@65 -- # wait 61160 00:08:45.163 [2024-11-17T08:59:22.093Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.163 Nvme0n1 : 3.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:08:45.163 [2024-11-17T08:59:22.093Z] =================================================================================================================== 00:08:45.163 [2024-11-17T08:59:22.093Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:08:45.163 00:08:46.101 [2024-11-17T08:59:23.031Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.101 Nvme0n1 : 4.00 6381.75 24.93 0.00 0.00 0.00 0.00 0.00 00:08:46.101 [2024-11-17T08:59:23.031Z] =================================================================================================================== 00:08:46.101 [2024-11-17T08:59:23.031Z] Total : 6381.75 24.93 0.00 0.00 0.00 0.00 0.00 00:08:46.101 00:08:47.039 [2024-11-17T08:59:23.969Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.039 Nvme0n1 : 5.00 6375.40 24.90 0.00 0.00 0.00 0.00 0.00 00:08:47.039 [2024-11-17T08:59:23.969Z] =================================================================================================================== 00:08:47.039 [2024-11-17T08:59:23.969Z] Total : 6375.40 24.90 0.00 0.00 0.00 0.00 0.00 00:08:47.039 00:08:48.418 [2024-11-17T08:59:25.348Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.418 Nvme0n1 : 6.00 6371.17 24.89 0.00 0.00 0.00 0.00 0.00 00:08:48.418 [2024-11-17T08:59:25.348Z] =================================================================================================================== 00:08:48.418 [2024-11-17T08:59:25.348Z] Total : 6371.17 24.89 0.00 0.00 0.00 0.00 0.00 00:08:48.418 00:08:49.356 [2024-11-17T08:59:26.286Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:49.356 Nvme0n1 : 7.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:49.356 [2024-11-17T08:59:26.286Z] =================================================================================================================== 00:08:49.356 [2024-11-17T08:59:26.286Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:49.356 00:08:50.294 [2024-11-17T08:59:27.224Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.294 Nvme0n1 : 8.00 6175.12 24.12 0.00 0.00 0.00 0.00 0.00 00:08:50.294 [2024-11-17T08:59:27.224Z] =================================================================================================================== 00:08:50.294 [2024-11-17T08:59:27.224Z] Total : 6175.12 24.12 0.00 0.00 0.00 0.00 0.00 00:08:50.294 00:08:51.232 [2024-11-17T08:59:28.162Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.232 Nvme0n1 : 9.00 6180.44 24.14 0.00 0.00 0.00 0.00 0.00 00:08:51.232 [2024-11-17T08:59:28.162Z] =================================================================================================================== 00:08:51.232 [2024-11-17T08:59:28.162Z] Total : 6180.44 24.14 0.00 0.00 0.00 0.00 0.00 00:08:51.232 00:08:52.172 [2024-11-17T08:59:29.102Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.172 Nvme0n1 : 10.00 6172.00 24.11 0.00 0.00 0.00 0.00 0.00 00:08:52.172 [2024-11-17T08:59:29.102Z] =================================================================================================================== 00:08:52.172 [2024-11-17T08:59:29.102Z] Total : 6172.00 24.11 0.00 0.00 0.00 0.00 0.00 00:08:52.172 00:08:52.172 00:08:52.172 Latency(us) 00:08:52.172 [2024-11-17T08:59:29.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.172 [2024-11-17T08:59:29.102Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.172 Nvme0n1 : 10.01 6181.53 24.15 0.00 0.00 20701.84 15371.17 263097.25 00:08:52.172 [2024-11-17T08:59:29.102Z] =================================================================================================================== 00:08:52.172 [2024-11-17T08:59:29.102Z] Total : 6181.53 24.15 0.00 0.00 20701.84 15371.17 263097.25 00:08:52.172 0 00:08:52.172 08:59:28 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 61142 00:08:52.172 08:59:28 -- common/autotest_common.sh@936 -- # '[' -z 61142 ']' 00:08:52.172 08:59:28 -- common/autotest_common.sh@940 -- # kill -0 61142 00:08:52.172 08:59:28 -- common/autotest_common.sh@941 -- # uname 00:08:52.172 08:59:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:52.172 08:59:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61142 00:08:52.172 killing process with pid 61142 00:08:52.172 Received shutdown signal, test time was about 10.000000 seconds 00:08:52.172 00:08:52.172 Latency(us) 00:08:52.172 [2024-11-17T08:59:29.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.172 [2024-11-17T08:59:29.102Z] =================================================================================================================== 00:08:52.172 [2024-11-17T08:59:29.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:52.172 08:59:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:52.172 08:59:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:52.172 08:59:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61142' 00:08:52.172 08:59:28 -- 
common/autotest_common.sh@955 -- # kill 61142 00:08:52.172 08:59:28 -- common/autotest_common.sh@960 -- # wait 61142 00:08:52.430 08:59:29 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:52.687 08:59:29 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:52.687 08:59:29 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:52.945 08:59:29 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:52.945 08:59:29 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:08:52.945 08:59:29 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60785 00:08:52.945 08:59:29 -- target/nvmf_lvs_grow.sh@74 -- # wait 60785 00:08:52.945 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60785 Killed "${NVMF_APP[@]}" "$@" 00:08:52.945 08:59:29 -- target/nvmf_lvs_grow.sh@74 -- # true 00:08:52.945 08:59:29 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:08:52.945 08:59:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:52.945 08:59:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.945 08:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:52.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.945 08:59:29 -- nvmf/common.sh@469 -- # nvmfpid=61292 00:08:52.945 08:59:29 -- nvmf/common.sh@470 -- # waitforlisten 61292 00:08:52.945 08:59:29 -- common/autotest_common.sh@829 -- # '[' -z 61292 ']' 00:08:52.945 08:59:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:52.945 08:59:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.945 08:59:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.945 08:59:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.945 08:59:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.945 08:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:52.945 [2024-11-17 08:59:29.797975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:52.945 [2024-11-17 08:59:29.798318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.204 [2024-11-17 08:59:29.940199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.204 [2024-11-17 08:59:29.993695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.204 [2024-11-17 08:59:29.994159] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.204 [2024-11-17 08:59:29.994299] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.204 [2024-11-17 08:59:29.994439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:53.204 [2024-11-17 08:59:29.994479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.140 08:59:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.140 08:59:30 -- common/autotest_common.sh@862 -- # return 0 00:08:54.140 08:59:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:54.140 08:59:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.140 08:59:30 -- common/autotest_common.sh@10 -- # set +x 00:08:54.140 08:59:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.140 08:59:30 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.140 [2024-11-17 08:59:31.027957] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:54.140 [2024-11-17 08:59:31.030293] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:54.140 [2024-11-17 08:59:31.030486] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:54.399 08:59:31 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:08:54.399 08:59:31 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev fb1fdffd-38d6-48db-a59b-aa34d9116256 00:08:54.399 08:59:31 -- common/autotest_common.sh@897 -- # local bdev_name=fb1fdffd-38d6-48db-a59b-aa34d9116256 00:08:54.399 08:59:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:54.399 08:59:31 -- common/autotest_common.sh@899 -- # local i 00:08:54.399 08:59:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:54.399 08:59:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:54.399 08:59:31 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.658 08:59:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb1fdffd-38d6-48db-a59b-aa34d9116256 -t 2000 00:08:54.917 [ 00:08:54.917 { 00:08:54.917 "name": "fb1fdffd-38d6-48db-a59b-aa34d9116256", 00:08:54.917 "aliases": [ 00:08:54.917 "lvs/lvol" 00:08:54.917 ], 00:08:54.917 "product_name": "Logical Volume", 00:08:54.917 "block_size": 4096, 00:08:54.917 "num_blocks": 38912, 00:08:54.917 "uuid": "fb1fdffd-38d6-48db-a59b-aa34d9116256", 00:08:54.917 "assigned_rate_limits": { 00:08:54.917 "rw_ios_per_sec": 0, 00:08:54.917 "rw_mbytes_per_sec": 0, 00:08:54.917 "r_mbytes_per_sec": 0, 00:08:54.917 "w_mbytes_per_sec": 0 00:08:54.917 }, 00:08:54.917 "claimed": false, 00:08:54.917 "zoned": false, 00:08:54.917 "supported_io_types": { 00:08:54.917 "read": true, 00:08:54.917 "write": true, 00:08:54.917 "unmap": true, 00:08:54.917 "write_zeroes": true, 00:08:54.917 "flush": false, 00:08:54.917 "reset": true, 00:08:54.917 "compare": false, 00:08:54.917 "compare_and_write": false, 00:08:54.917 "abort": false, 00:08:54.917 "nvme_admin": false, 00:08:54.917 "nvme_io": false 00:08:54.917 }, 00:08:54.917 "driver_specific": { 00:08:54.917 "lvol": { 00:08:54.917 "lvol_store_uuid": "5907bd8b-829d-4ab2-b79d-2559e6d2e2da", 00:08:54.917 "base_bdev": "aio_bdev", 00:08:54.917 "thin_provision": false, 00:08:54.917 "snapshot": false, 00:08:54.917 "clone": false, 00:08:54.917 "esnap_clone": false 00:08:54.918 } 00:08:54.918 } 00:08:54.918 } 00:08:54.918 ] 00:08:54.918 08:59:31 -- common/autotest_common.sh@905 -- # return 0 00:08:54.918 08:59:31 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:54.918 08:59:31 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:08:55.177 08:59:31 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:08:55.177 08:59:31 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:55.177 08:59:31 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:08:55.436 08:59:32 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:08:55.436 08:59:32 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.695 [2024-11-17 08:59:32.378756] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:55.695 08:59:32 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:55.695 08:59:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:55.695 08:59:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:55.695 08:59:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.695 08:59:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.695 08:59:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.695 08:59:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.695 08:59:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.695 08:59:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.695 08:59:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.695 08:59:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:55.695 08:59:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:55.954 request: 00:08:55.954 { 00:08:55.954 "uuid": "5907bd8b-829d-4ab2-b79d-2559e6d2e2da", 00:08:55.954 "method": "bdev_lvol_get_lvstores", 00:08:55.954 "req_id": 1 00:08:55.954 } 00:08:55.954 Got JSON-RPC error response 00:08:55.954 response: 00:08:55.954 { 00:08:55.954 "code": -19, 00:08:55.954 "message": "No such device" 00:08:55.954 } 00:08:55.954 08:59:32 -- common/autotest_common.sh@653 -- # es=1 00:08:55.954 08:59:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.954 08:59:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:55.954 08:59:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.954 08:59:32 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.213 aio_bdev 00:08:56.213 08:59:32 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev fb1fdffd-38d6-48db-a59b-aa34d9116256 00:08:56.213 08:59:32 -- common/autotest_common.sh@897 -- # local bdev_name=fb1fdffd-38d6-48db-a59b-aa34d9116256 00:08:56.213 08:59:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:56.213 08:59:32 -- common/autotest_common.sh@899 -- # local i 00:08:56.213 08:59:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:56.213 08:59:32 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:56.213 08:59:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.472 08:59:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb1fdffd-38d6-48db-a59b-aa34d9116256 -t 2000 00:08:56.472 [ 00:08:56.472 { 00:08:56.472 "name": "fb1fdffd-38d6-48db-a59b-aa34d9116256", 00:08:56.472 "aliases": [ 00:08:56.472 "lvs/lvol" 00:08:56.472 ], 00:08:56.472 "product_name": "Logical Volume", 00:08:56.472 "block_size": 4096, 00:08:56.472 "num_blocks": 38912, 00:08:56.472 "uuid": "fb1fdffd-38d6-48db-a59b-aa34d9116256", 00:08:56.472 "assigned_rate_limits": { 00:08:56.472 "rw_ios_per_sec": 0, 00:08:56.472 "rw_mbytes_per_sec": 0, 00:08:56.472 "r_mbytes_per_sec": 0, 00:08:56.472 "w_mbytes_per_sec": 0 00:08:56.472 }, 00:08:56.472 "claimed": false, 00:08:56.472 "zoned": false, 00:08:56.472 "supported_io_types": { 00:08:56.472 "read": true, 00:08:56.472 "write": true, 00:08:56.472 "unmap": true, 00:08:56.472 "write_zeroes": true, 00:08:56.472 "flush": false, 00:08:56.472 "reset": true, 00:08:56.472 "compare": false, 00:08:56.472 "compare_and_write": false, 00:08:56.472 "abort": false, 00:08:56.472 "nvme_admin": false, 00:08:56.472 "nvme_io": false 00:08:56.472 }, 00:08:56.472 "driver_specific": { 00:08:56.472 "lvol": { 00:08:56.472 "lvol_store_uuid": "5907bd8b-829d-4ab2-b79d-2559e6d2e2da", 00:08:56.472 "base_bdev": "aio_bdev", 00:08:56.472 "thin_provision": false, 00:08:56.472 "snapshot": false, 00:08:56.472 "clone": false, 00:08:56.472 "esnap_clone": false 00:08:56.472 } 00:08:56.472 } 00:08:56.472 } 00:08:56.472 ] 00:08:56.472 08:59:33 -- common/autotest_common.sh@905 -- # return 0 00:08:56.472 08:59:33 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:56.472 08:59:33 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:56.740 08:59:33 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:56.740 08:59:33 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:56.740 08:59:33 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:57.009 08:59:33 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:57.009 08:59:33 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fb1fdffd-38d6-48db-a59b-aa34d9116256 00:08:57.268 08:59:34 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5907bd8b-829d-4ab2-b79d-2559e6d2e2da 00:08:57.527 08:59:34 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.786 08:59:34 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:58.045 ************************************ 00:08:58.045 END TEST lvs_grow_dirty 00:08:58.045 ************************************ 00:08:58.045 00:08:58.045 real 0m20.044s 00:08:58.045 user 0m39.852s 00:08:58.045 sys 0m9.200s 00:08:58.045 08:59:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.045 08:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:58.303 08:59:34 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:58.303 08:59:34 -- common/autotest_common.sh@806 -- # type=--id 00:08:58.303 08:59:34 -- 
common/autotest_common.sh@807 -- # id=0 00:08:58.303 08:59:34 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:58.303 08:59:34 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:58.303 08:59:35 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:58.303 08:59:35 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:58.303 08:59:35 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:58.303 08:59:35 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:58.303 nvmf_trace.0 00:08:58.303 08:59:35 -- common/autotest_common.sh@821 -- # return 0 00:08:58.303 08:59:35 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:58.303 08:59:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:58.303 08:59:35 -- nvmf/common.sh@116 -- # sync 00:08:58.562 08:59:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:58.562 08:59:35 -- nvmf/common.sh@119 -- # set +e 00:08:58.562 08:59:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:58.562 08:59:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:58.562 rmmod nvme_tcp 00:08:58.562 rmmod nvme_fabrics 00:08:58.562 rmmod nvme_keyring 00:08:58.821 08:59:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:58.821 08:59:35 -- nvmf/common.sh@123 -- # set -e 00:08:58.821 08:59:35 -- nvmf/common.sh@124 -- # return 0 00:08:58.821 08:59:35 -- nvmf/common.sh@477 -- # '[' -n 61292 ']' 00:08:58.821 08:59:35 -- nvmf/common.sh@478 -- # killprocess 61292 00:08:58.821 08:59:35 -- common/autotest_common.sh@936 -- # '[' -z 61292 ']' 00:08:58.821 08:59:35 -- common/autotest_common.sh@940 -- # kill -0 61292 00:08:58.821 08:59:35 -- common/autotest_common.sh@941 -- # uname 00:08:58.821 08:59:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:58.821 08:59:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61292 00:08:58.822 08:59:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:58.822 08:59:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:58.822 killing process with pid 61292 00:08:58.822 08:59:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61292' 00:08:58.822 08:59:35 -- common/autotest_common.sh@955 -- # kill 61292 00:08:58.822 08:59:35 -- common/autotest_common.sh@960 -- # wait 61292 00:08:58.822 08:59:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:58.822 08:59:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:58.822 08:59:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:58.822 08:59:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.822 08:59:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:58.822 08:59:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.822 08:59:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.822 08:59:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.081 08:59:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:59.081 00:08:59.081 real 0m40.634s 00:08:59.081 user 1m3.326s 00:08:59.081 sys 0m12.549s 00:08:59.081 08:59:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.081 ************************************ 00:08:59.081 08:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:59.081 END TEST nvmf_lvs_grow 00:08:59.081 ************************************ 00:08:59.081 08:59:35 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:59.081 08:59:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:59.081 08:59:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.081 08:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:59.081 ************************************ 00:08:59.081 START TEST nvmf_bdev_io_wait 00:08:59.081 ************************************ 00:08:59.081 08:59:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:59.081 * Looking for test storage... 00:08:59.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:59.081 08:59:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:59.081 08:59:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:59.081 08:59:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:59.081 08:59:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:59.081 08:59:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:59.081 08:59:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:59.081 08:59:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:59.081 08:59:35 -- scripts/common.sh@335 -- # IFS=.-: 00:08:59.081 08:59:35 -- scripts/common.sh@335 -- # read -ra ver1 00:08:59.081 08:59:35 -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.081 08:59:35 -- scripts/common.sh@336 -- # read -ra ver2 00:08:59.081 08:59:35 -- scripts/common.sh@337 -- # local 'op=<' 00:08:59.081 08:59:35 -- scripts/common.sh@339 -- # ver1_l=2 00:08:59.081 08:59:35 -- scripts/common.sh@340 -- # ver2_l=1 00:08:59.081 08:59:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:59.081 08:59:35 -- scripts/common.sh@343 -- # case "$op" in 00:08:59.081 08:59:35 -- scripts/common.sh@344 -- # : 1 00:08:59.081 08:59:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:59.081 08:59:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.081 08:59:35 -- scripts/common.sh@364 -- # decimal 1 00:08:59.081 08:59:35 -- scripts/common.sh@352 -- # local d=1 00:08:59.081 08:59:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.081 08:59:35 -- scripts/common.sh@354 -- # echo 1 00:08:59.081 08:59:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:59.081 08:59:35 -- scripts/common.sh@365 -- # decimal 2 00:08:59.081 08:59:35 -- scripts/common.sh@352 -- # local d=2 00:08:59.081 08:59:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.081 08:59:35 -- scripts/common.sh@354 -- # echo 2 00:08:59.081 08:59:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:59.081 08:59:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:59.081 08:59:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:59.081 08:59:35 -- scripts/common.sh@367 -- # return 0 00:08:59.081 08:59:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.081 08:59:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.082 --rc genhtml_branch_coverage=1 00:08:59.082 --rc genhtml_function_coverage=1 00:08:59.082 --rc genhtml_legend=1 00:08:59.082 --rc geninfo_all_blocks=1 00:08:59.082 --rc geninfo_unexecuted_blocks=1 00:08:59.082 00:08:59.082 ' 00:08:59.082 08:59:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.082 --rc genhtml_branch_coverage=1 00:08:59.082 --rc genhtml_function_coverage=1 00:08:59.082 --rc genhtml_legend=1 00:08:59.082 --rc geninfo_all_blocks=1 00:08:59.082 --rc geninfo_unexecuted_blocks=1 00:08:59.082 00:08:59.082 ' 00:08:59.082 08:59:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.082 --rc genhtml_branch_coverage=1 00:08:59.082 --rc genhtml_function_coverage=1 00:08:59.082 --rc genhtml_legend=1 00:08:59.082 --rc geninfo_all_blocks=1 00:08:59.082 --rc geninfo_unexecuted_blocks=1 00:08:59.082 00:08:59.082 ' 00:08:59.082 08:59:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.082 --rc genhtml_branch_coverage=1 00:08:59.082 --rc genhtml_function_coverage=1 00:08:59.082 --rc genhtml_legend=1 00:08:59.082 --rc geninfo_all_blocks=1 00:08:59.082 --rc geninfo_unexecuted_blocks=1 00:08:59.082 00:08:59.082 ' 00:08:59.082 08:59:35 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:59.082 08:59:35 -- nvmf/common.sh@7 -- # uname -s 00:08:59.082 08:59:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.082 08:59:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.082 08:59:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.082 08:59:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.082 08:59:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.082 08:59:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.082 08:59:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.082 08:59:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.082 08:59:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.082 08:59:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.341 08:59:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 
00:08:59.341 08:59:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:08:59.341 08:59:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.341 08:59:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.341 08:59:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:59.341 08:59:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.341 08:59:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.341 08:59:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.341 08:59:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.341 08:59:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.341 08:59:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.341 08:59:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.341 08:59:36 -- paths/export.sh@5 -- # export PATH 00:08:59.341 08:59:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.341 08:59:36 -- nvmf/common.sh@46 -- # : 0 00:08:59.341 08:59:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:59.341 08:59:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:59.341 08:59:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:59.341 08:59:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.341 08:59:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.341 08:59:36 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:59.341 08:59:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:59.341 08:59:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:59.341 08:59:36 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.341 08:59:36 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.341 08:59:36 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:59.341 08:59:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:59.341 08:59:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.341 08:59:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:59.341 08:59:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:59.341 08:59:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:59.342 08:59:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.342 08:59:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.342 08:59:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.342 08:59:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:59.342 08:59:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:59.342 08:59:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:59.342 08:59:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:59.342 08:59:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:59.342 08:59:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:59.342 08:59:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.342 08:59:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.342 08:59:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:59.342 08:59:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:59.342 08:59:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:59.342 08:59:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:59.342 08:59:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:59.342 08:59:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.342 08:59:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:59.342 08:59:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:59.342 08:59:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:59.342 08:59:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:59.342 08:59:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:59.342 08:59:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:59.342 Cannot find device "nvmf_tgt_br" 00:08:59.342 08:59:36 -- nvmf/common.sh@154 -- # true 00:08:59.342 08:59:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:59.342 Cannot find device "nvmf_tgt_br2" 00:08:59.342 08:59:36 -- nvmf/common.sh@155 -- # true 00:08:59.342 08:59:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:59.342 08:59:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:59.342 Cannot find device "nvmf_tgt_br" 00:08:59.342 08:59:36 -- nvmf/common.sh@157 -- # true 00:08:59.342 08:59:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:59.342 Cannot find device "nvmf_tgt_br2" 00:08:59.342 08:59:36 -- nvmf/common.sh@158 -- # true 00:08:59.342 08:59:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:59.342 08:59:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:59.342 08:59:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:59.342 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.342 08:59:36 -- nvmf/common.sh@161 -- # true 00:08:59.342 08:59:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.342 08:59:36 -- nvmf/common.sh@162 -- # true 00:08:59.342 08:59:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:59.342 08:59:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:59.342 08:59:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:59.342 08:59:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:59.342 08:59:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:59.342 08:59:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:59.342 08:59:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:59.342 08:59:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:59.342 08:59:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:59.342 08:59:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:59.342 08:59:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:59.342 08:59:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:59.342 08:59:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:59.342 08:59:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:59.342 08:59:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:59.342 08:59:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:59.342 08:59:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:59.342 08:59:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:59.342 08:59:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.601 08:59:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.601 08:59:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:59.601 08:59:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.601 08:59:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.601 08:59:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:59.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:08:59.601 00:08:59.601 --- 10.0.0.2 ping statistics --- 00:08:59.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.601 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:59.601 08:59:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:59.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:59.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:08:59.601 00:08:59.601 --- 10.0.0.3 ping statistics --- 00:08:59.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.601 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:59.601 08:59:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:59.601 00:08:59.601 --- 10.0.0.1 ping statistics --- 00:08:59.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.601 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:59.601 08:59:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.601 08:59:36 -- nvmf/common.sh@421 -- # return 0 00:08:59.601 08:59:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:59.601 08:59:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.601 08:59:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:59.601 08:59:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:59.601 08:59:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.601 08:59:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:59.601 08:59:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:59.601 08:59:36 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:59.601 08:59:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:59.601 08:59:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.601 08:59:36 -- common/autotest_common.sh@10 -- # set +x 00:08:59.601 08:59:36 -- nvmf/common.sh@469 -- # nvmfpid=61612 00:08:59.601 08:59:36 -- nvmf/common.sh@470 -- # waitforlisten 61612 00:08:59.601 08:59:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:59.601 08:59:36 -- common/autotest_common.sh@829 -- # '[' -z 61612 ']' 00:08:59.601 08:59:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.601 08:59:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.601 08:59:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.601 08:59:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.601 08:59:36 -- common/autotest_common.sh@10 -- # set +x 00:08:59.601 [2024-11-17 08:59:36.409849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:59.601 [2024-11-17 08:59:36.409939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.860 [2024-11-17 08:59:36.552974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.860 [2024-11-17 08:59:36.627830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.860 [2024-11-17 08:59:36.628379] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.860 [2024-11-17 08:59:36.628633] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.860 [2024-11-17 08:59:36.628946] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
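For context, the nvmf_veth_init sequence traced above builds the virtual topology that this run uses: an initiator veth pair on the host, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, everything bridged over nvmf_br, and the three pings confirming 10.0.0.1/2/3 are reachable before the target starts. A rough hand-run equivalent of the commands already shown in the trace (illustrative subset only; the second target interface, the remaining link-up steps and the FORWARD rule are omitted, and nvmf/common.sh remains the authoritative sequence) would be:

    # create the namespace and the veth pairs named as in nvmf/common.sh
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # address the initiator side on the host, the target side inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bridge the peer ends together and allow NVMe/TCP traffic to port 4420
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT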
00:08:59.860 [2024-11-17 08:59:36.629353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.860 [2024-11-17 08:59:36.629430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.860 [2024-11-17 08:59:36.629927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.860 [2024-11-17 08:59:36.629500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.791 08:59:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.791 08:59:37 -- common/autotest_common.sh@862 -- # return 0 00:09:00.791 08:59:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:00.791 08:59:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.791 08:59:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.791 08:59:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:00.791 08:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.791 08:59:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.791 08:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:00.791 08:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.791 08:59:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.791 08:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.791 08:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.791 08:59:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.791 [2024-11-17 08:59:37.515639] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.791 08:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:00.791 08:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.791 08:59:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.791 Malloc0 00:09:00.791 08:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:00.791 08:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.791 08:59:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.791 08:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.791 08:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.791 08:59:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.791 08:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.791 08:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.791 08:59:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.791 [2024-11-17 08:59:37.572890] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.791 08:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61647 00:09:00.791 08:59:37 
-- target/bdev_io_wait.sh@30 -- # READ_PID=61649 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:00.791 08:59:37 -- nvmf/common.sh@520 -- # config=() 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61651 00:09:00.791 08:59:37 -- nvmf/common.sh@520 -- # local subsystem config 00:09:00.791 08:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:00.791 08:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:00.791 { 00:09:00.791 "params": { 00:09:00.791 "name": "Nvme$subsystem", 00:09:00.791 "trtype": "$TEST_TRANSPORT", 00:09:00.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.791 "adrfam": "ipv4", 00:09:00.791 "trsvcid": "$NVMF_PORT", 00:09:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.791 "hdgst": ${hdgst:-false}, 00:09:00.791 "ddgst": ${ddgst:-false} 00:09:00.791 }, 00:09:00.791 "method": "bdev_nvme_attach_controller" 00:09:00.791 } 00:09:00.791 EOF 00:09:00.791 )") 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:00.791 08:59:37 -- nvmf/common.sh@520 -- # config=() 00:09:00.791 08:59:37 -- nvmf/common.sh@520 -- # local subsystem config 00:09:00.791 08:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:00.791 08:59:37 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:00.791 08:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:00.791 { 00:09:00.791 "params": { 00:09:00.791 "name": "Nvme$subsystem", 00:09:00.791 "trtype": "$TEST_TRANSPORT", 00:09:00.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.791 "adrfam": "ipv4", 00:09:00.791 "trsvcid": "$NVMF_PORT", 00:09:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.791 "hdgst": ${hdgst:-false}, 00:09:00.791 "ddgst": ${ddgst:-false} 00:09:00.791 }, 00:09:00.792 "method": "bdev_nvme_attach_controller" 00:09:00.792 } 00:09:00.792 EOF 00:09:00.792 )") 00:09:00.792 08:59:37 -- nvmf/common.sh@542 -- # cat 00:09:00.792 08:59:37 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:00.792 08:59:37 -- nvmf/common.sh@542 -- # cat 00:09:00.792 08:59:37 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:00.792 08:59:37 -- nvmf/common.sh@520 -- # config=() 00:09:00.792 08:59:37 -- nvmf/common.sh@520 -- # local subsystem config 00:09:00.792 08:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:00.792 08:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:00.792 { 00:09:00.792 "params": { 00:09:00.792 "name": "Nvme$subsystem", 00:09:00.792 "trtype": "$TEST_TRANSPORT", 00:09:00.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.792 "adrfam": "ipv4", 00:09:00.792 "trsvcid": "$NVMF_PORT", 00:09:00.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.792 "hdgst": ${hdgst:-false}, 00:09:00.792 "ddgst": ${ddgst:-false} 
00:09:00.792 }, 00:09:00.792 "method": "bdev_nvme_attach_controller" 00:09:00.792 } 00:09:00.792 EOF 00:09:00.792 )") 00:09:00.792 08:59:37 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:00.792 08:59:37 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61652 00:09:00.792 08:59:37 -- target/bdev_io_wait.sh@35 -- # sync 00:09:00.792 08:59:37 -- nvmf/common.sh@520 -- # config=() 00:09:00.792 08:59:37 -- nvmf/common.sh@520 -- # local subsystem config 00:09:00.792 08:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:00.792 08:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:00.792 { 00:09:00.792 "params": { 00:09:00.792 "name": "Nvme$subsystem", 00:09:00.792 "trtype": "$TEST_TRANSPORT", 00:09:00.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.792 "adrfam": "ipv4", 00:09:00.792 "trsvcid": "$NVMF_PORT", 00:09:00.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.792 "hdgst": ${hdgst:-false}, 00:09:00.792 "ddgst": ${ddgst:-false} 00:09:00.792 }, 00:09:00.792 "method": "bdev_nvme_attach_controller" 00:09:00.792 } 00:09:00.792 EOF 00:09:00.792 )") 00:09:00.792 08:59:37 -- nvmf/common.sh@542 -- # cat 00:09:00.792 08:59:37 -- nvmf/common.sh@544 -- # jq . 00:09:00.792 08:59:37 -- nvmf/common.sh@544 -- # jq . 00:09:00.792 08:59:37 -- nvmf/common.sh@542 -- # cat 00:09:00.792 08:59:37 -- nvmf/common.sh@545 -- # IFS=, 00:09:00.792 08:59:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:00.792 "params": { 00:09:00.792 "name": "Nvme1", 00:09:00.792 "trtype": "tcp", 00:09:00.792 "traddr": "10.0.0.2", 00:09:00.792 "adrfam": "ipv4", 00:09:00.792 "trsvcid": "4420", 00:09:00.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.792 "hdgst": false, 00:09:00.792 "ddgst": false 00:09:00.792 }, 00:09:00.792 "method": "bdev_nvme_attach_controller" 00:09:00.792 }' 00:09:00.792 08:59:37 -- nvmf/common.sh@545 -- # IFS=, 00:09:00.792 08:59:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:00.792 "params": { 00:09:00.792 "name": "Nvme1", 00:09:00.792 "trtype": "tcp", 00:09:00.792 "traddr": "10.0.0.2", 00:09:00.792 "adrfam": "ipv4", 00:09:00.792 "trsvcid": "4420", 00:09:00.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.792 "hdgst": false, 00:09:00.792 "ddgst": false 00:09:00.792 }, 00:09:00.792 "method": "bdev_nvme_attach_controller" 00:09:00.792 }' 00:09:00.792 08:59:37 -- nvmf/common.sh@544 -- # jq . 00:09:00.792 08:59:37 -- nvmf/common.sh@545 -- # IFS=, 00:09:00.792 08:59:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:00.792 "params": { 00:09:00.792 "name": "Nvme1", 00:09:00.792 "trtype": "tcp", 00:09:00.792 "traddr": "10.0.0.2", 00:09:00.792 "adrfam": "ipv4", 00:09:00.792 "trsvcid": "4420", 00:09:00.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.792 "hdgst": false, 00:09:00.792 "ddgst": false 00:09:00.792 }, 00:09:00.792 "method": "bdev_nvme_attach_controller" 00:09:00.792 }' 00:09:00.792 08:59:37 -- nvmf/common.sh@544 -- # jq . 
00:09:00.792 08:59:37 -- nvmf/common.sh@545 -- # IFS=, 00:09:00.792 08:59:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:00.792 "params": { 00:09:00.792 "name": "Nvme1", 00:09:00.792 "trtype": "tcp", 00:09:00.792 "traddr": "10.0.0.2", 00:09:00.792 "adrfam": "ipv4", 00:09:00.792 "trsvcid": "4420", 00:09:00.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.792 "hdgst": false, 00:09:00.792 "ddgst": false 00:09:00.792 }, 00:09:00.792 "method": "bdev_nvme_attach_controller" 00:09:00.792 }' 00:09:00.792 [2024-11-17 08:59:37.623976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.792 [2024-11-17 08:59:37.624055] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:00.792 [2024-11-17 08:59:37.630918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.792 [2024-11-17 08:59:37.630997] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:00.792 [2024-11-17 08:59:37.637836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.792 [2024-11-17 08:59:37.640338] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:00.792 08:59:37 -- target/bdev_io_wait.sh@37 -- # wait 61647 00:09:00.792 [2024-11-17 08:59:37.667935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.792 [2024-11-17 08:59:37.668033] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:01.050 [2024-11-17 08:59:37.802500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.050 [2024-11-17 08:59:37.846510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.050 [2024-11-17 08:59:37.856468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:01.050 [2024-11-17 08:59:37.888191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.050 [2024-11-17 08:59:37.889231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:01.050 [2024-11-17 08:59:37.931195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.050 [2024-11-17 08:59:37.940692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:01.050 Running I/O for 1 seconds... 00:09:01.308 [2024-11-17 08:59:37.984393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:01.308 Running I/O for 1 seconds... 00:09:01.308 Running I/O for 1 seconds... 00:09:01.308 Running I/O for 1 seconds... 
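The four bdevperf instances started above each exercise one workload against the same subsystem: write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80. Each gets its bdev configuration from gen_nvmf_target_json; the printf '{ "params": { "name": "Nvme1", ... } }' fragments in the trace are the per-controller parameters it assembles, and the --json /dev/fd/63 argument indicates the config is handed over via process substitution. A single run can therefore be reproduced roughly as follows (a sketch of the logged invocation, not the literal script line):

    # queue depth 128, 4 KiB I/O, 1-second write run, 256 MB of app memory
    ./build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap runs in the latency tables that follow differ only in the -m, -i and -w values.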
00:09:02.242 00:09:02.242 Latency(us) 00:09:02.242 [2024-11-17T08:59:39.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.242 [2024-11-17T08:59:39.172Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:02.242 Nvme1n1 : 1.02 6141.04 23.99 0.00 0.00 20552.56 8281.37 35985.22 00:09:02.242 [2024-11-17T08:59:39.172Z] =================================================================================================================== 00:09:02.242 [2024-11-17T08:59:39.172Z] Total : 6141.04 23.99 0.00 0.00 20552.56 8281.37 35985.22 00:09:02.242 00:09:02.242 Latency(us) 00:09:02.242 [2024-11-17T08:59:39.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.242 [2024-11-17T08:59:39.172Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:02.242 Nvme1n1 : 1.00 175202.97 684.39 0.00 0.00 727.86 344.44 942.08 00:09:02.242 [2024-11-17T08:59:39.172Z] =================================================================================================================== 00:09:02.242 [2024-11-17T08:59:39.172Z] Total : 175202.97 684.39 0.00 0.00 727.86 344.44 942.08 00:09:02.242 00:09:02.242 Latency(us) 00:09:02.242 [2024-11-17T08:59:39.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.242 [2024-11-17T08:59:39.172Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:02.242 Nvme1n1 : 1.01 9735.14 38.03 0.00 0.00 13094.94 7149.38 27167.65 00:09:02.242 [2024-11-17T08:59:39.172Z] =================================================================================================================== 00:09:02.242 [2024-11-17T08:59:39.172Z] Total : 9735.14 38.03 0.00 0.00 13094.94 7149.38 27167.65 00:09:02.242 00:09:02.242 Latency(us) 00:09:02.242 [2024-11-17T08:59:39.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.242 [2024-11-17T08:59:39.172Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:02.242 Nvme1n1 : 1.01 6045.49 23.62 0.00 0.00 21093.43 6762.12 48139.17 00:09:02.242 [2024-11-17T08:59:39.172Z] =================================================================================================================== 00:09:02.242 [2024-11-17T08:59:39.172Z] Total : 6045.49 23.62 0.00 0.00 21093.43 6762.12 48139.17 00:09:02.499 08:59:39 -- target/bdev_io_wait.sh@38 -- # wait 61649 00:09:02.499 08:59:39 -- target/bdev_io_wait.sh@39 -- # wait 61651 00:09:02.499 08:59:39 -- target/bdev_io_wait.sh@40 -- # wait 61652 00:09:02.499 08:59:39 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.499 08:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.500 08:59:39 -- common/autotest_common.sh@10 -- # set +x 00:09:02.500 08:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.500 08:59:39 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:02.500 08:59:39 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:02.500 08:59:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:02.500 08:59:39 -- nvmf/common.sh@116 -- # sync 00:09:02.500 08:59:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:02.500 08:59:39 -- nvmf/common.sh@119 -- # set +e 00:09:02.500 08:59:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:02.500 08:59:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:02.500 rmmod nvme_tcp 00:09:02.500 rmmod nvme_fabrics 00:09:02.500 rmmod nvme_keyring 00:09:02.500 08:59:39 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:02.500 08:59:39 -- nvmf/common.sh@123 -- # set -e 00:09:02.500 08:59:39 -- nvmf/common.sh@124 -- # return 0 00:09:02.500 08:59:39 -- nvmf/common.sh@477 -- # '[' -n 61612 ']' 00:09:02.500 08:59:39 -- nvmf/common.sh@478 -- # killprocess 61612 00:09:02.500 08:59:39 -- common/autotest_common.sh@936 -- # '[' -z 61612 ']' 00:09:02.500 08:59:39 -- common/autotest_common.sh@940 -- # kill -0 61612 00:09:02.500 08:59:39 -- common/autotest_common.sh@941 -- # uname 00:09:02.500 08:59:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:02.500 08:59:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61612 00:09:02.757 08:59:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:02.757 08:59:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:02.757 killing process with pid 61612 00:09:02.757 08:59:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61612' 00:09:02.757 08:59:39 -- common/autotest_common.sh@955 -- # kill 61612 00:09:02.757 08:59:39 -- common/autotest_common.sh@960 -- # wait 61612 00:09:02.757 08:59:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:02.757 08:59:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:02.757 08:59:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:02.757 08:59:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.757 08:59:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:02.757 08:59:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.757 08:59:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.757 08:59:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.757 08:59:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:02.757 00:09:02.757 real 0m3.832s 00:09:02.757 user 0m16.571s 00:09:02.757 sys 0m1.871s 00:09:02.757 08:59:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.757 08:59:39 -- common/autotest_common.sh@10 -- # set +x 00:09:02.757 ************************************ 00:09:02.757 END TEST nvmf_bdev_io_wait 00:09:02.757 ************************************ 00:09:03.017 08:59:39 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:03.017 08:59:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:03.017 08:59:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.017 08:59:39 -- common/autotest_common.sh@10 -- # set +x 00:09:03.017 ************************************ 00:09:03.017 START TEST nvmf_queue_depth 00:09:03.017 ************************************ 00:09:03.017 08:59:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:03.017 * Looking for test storage... 
00:09:03.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.017 08:59:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:03.017 08:59:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:03.017 08:59:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:03.017 08:59:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:03.017 08:59:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:03.017 08:59:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:03.017 08:59:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:03.017 08:59:39 -- scripts/common.sh@335 -- # IFS=.-: 00:09:03.017 08:59:39 -- scripts/common.sh@335 -- # read -ra ver1 00:09:03.017 08:59:39 -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.017 08:59:39 -- scripts/common.sh@336 -- # read -ra ver2 00:09:03.017 08:59:39 -- scripts/common.sh@337 -- # local 'op=<' 00:09:03.017 08:59:39 -- scripts/common.sh@339 -- # ver1_l=2 00:09:03.017 08:59:39 -- scripts/common.sh@340 -- # ver2_l=1 00:09:03.017 08:59:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:03.017 08:59:39 -- scripts/common.sh@343 -- # case "$op" in 00:09:03.017 08:59:39 -- scripts/common.sh@344 -- # : 1 00:09:03.017 08:59:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:03.017 08:59:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.017 08:59:39 -- scripts/common.sh@364 -- # decimal 1 00:09:03.017 08:59:39 -- scripts/common.sh@352 -- # local d=1 00:09:03.017 08:59:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.017 08:59:39 -- scripts/common.sh@354 -- # echo 1 00:09:03.017 08:59:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:03.017 08:59:39 -- scripts/common.sh@365 -- # decimal 2 00:09:03.017 08:59:39 -- scripts/common.sh@352 -- # local d=2 00:09:03.017 08:59:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.017 08:59:39 -- scripts/common.sh@354 -- # echo 2 00:09:03.017 08:59:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:03.017 08:59:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:03.017 08:59:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:03.017 08:59:39 -- scripts/common.sh@367 -- # return 0 00:09:03.017 08:59:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.017 08:59:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:03.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.017 --rc genhtml_branch_coverage=1 00:09:03.017 --rc genhtml_function_coverage=1 00:09:03.017 --rc genhtml_legend=1 00:09:03.017 --rc geninfo_all_blocks=1 00:09:03.017 --rc geninfo_unexecuted_blocks=1 00:09:03.017 00:09:03.017 ' 00:09:03.017 08:59:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:03.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.017 --rc genhtml_branch_coverage=1 00:09:03.017 --rc genhtml_function_coverage=1 00:09:03.017 --rc genhtml_legend=1 00:09:03.017 --rc geninfo_all_blocks=1 00:09:03.017 --rc geninfo_unexecuted_blocks=1 00:09:03.017 00:09:03.017 ' 00:09:03.017 08:59:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:03.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.017 --rc genhtml_branch_coverage=1 00:09:03.017 --rc genhtml_function_coverage=1 00:09:03.017 --rc genhtml_legend=1 00:09:03.017 --rc geninfo_all_blocks=1 00:09:03.017 --rc geninfo_unexecuted_blocks=1 00:09:03.017 00:09:03.017 ' 00:09:03.017 
08:59:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:03.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.017 --rc genhtml_branch_coverage=1 00:09:03.017 --rc genhtml_function_coverage=1 00:09:03.017 --rc genhtml_legend=1 00:09:03.017 --rc geninfo_all_blocks=1 00:09:03.017 --rc geninfo_unexecuted_blocks=1 00:09:03.017 00:09:03.017 ' 00:09:03.017 08:59:39 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.017 08:59:39 -- nvmf/common.sh@7 -- # uname -s 00:09:03.017 08:59:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.017 08:59:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.017 08:59:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.017 08:59:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.017 08:59:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.017 08:59:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.017 08:59:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.017 08:59:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.017 08:59:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.017 08:59:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.017 08:59:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:09:03.017 08:59:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:09:03.017 08:59:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.017 08:59:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.017 08:59:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.017 08:59:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.017 08:59:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.017 08:59:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.017 08:59:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.017 08:59:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.017 08:59:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.018 08:59:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.018 08:59:39 -- paths/export.sh@5 -- # export PATH 00:09:03.018 08:59:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.018 08:59:39 -- nvmf/common.sh@46 -- # : 0 00:09:03.018 08:59:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:03.018 08:59:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:03.018 08:59:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:03.018 08:59:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.018 08:59:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.018 08:59:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:03.018 08:59:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:03.018 08:59:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:03.018 08:59:39 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:03.018 08:59:39 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:03.018 08:59:39 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:03.018 08:59:39 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:03.018 08:59:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:03.018 08:59:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.018 08:59:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:03.018 08:59:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:03.018 08:59:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:03.018 08:59:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.018 08:59:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.018 08:59:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.018 08:59:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:03.018 08:59:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:03.018 08:59:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:03.018 08:59:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:03.018 08:59:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:03.018 08:59:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:03.018 08:59:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.018 08:59:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.018 08:59:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:03.018 08:59:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:03.018 08:59:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.018 08:59:39 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.018 08:59:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.018 08:59:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.018 08:59:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.018 08:59:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.018 08:59:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.018 08:59:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.018 08:59:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:03.018 08:59:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:03.018 Cannot find device "nvmf_tgt_br" 00:09:03.018 08:59:39 -- nvmf/common.sh@154 -- # true 00:09:03.018 08:59:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.018 Cannot find device "nvmf_tgt_br2" 00:09:03.018 08:59:39 -- nvmf/common.sh@155 -- # true 00:09:03.018 08:59:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:03.018 08:59:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:03.278 Cannot find device "nvmf_tgt_br" 00:09:03.278 08:59:39 -- nvmf/common.sh@157 -- # true 00:09:03.278 08:59:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:03.278 Cannot find device "nvmf_tgt_br2" 00:09:03.278 08:59:39 -- nvmf/common.sh@158 -- # true 00:09:03.278 08:59:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:03.278 08:59:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:03.278 08:59:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.278 08:59:40 -- nvmf/common.sh@161 -- # true 00:09:03.278 08:59:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.278 08:59:40 -- nvmf/common.sh@162 -- # true 00:09:03.278 08:59:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.278 08:59:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:03.278 08:59:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:03.278 08:59:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:03.278 08:59:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:03.278 08:59:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:03.278 08:59:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:03.278 08:59:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:03.278 08:59:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:03.278 08:59:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:03.278 08:59:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:03.278 08:59:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:03.278 08:59:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:03.278 08:59:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:03.278 08:59:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:09:03.278 08:59:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:03.278 08:59:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:03.278 08:59:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:03.278 08:59:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:03.278 08:59:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:03.278 08:59:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:03.278 08:59:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:03.278 08:59:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:03.278 08:59:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:03.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:03.278 00:09:03.278 --- 10.0.0.2 ping statistics --- 00:09:03.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.278 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:03.278 08:59:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:03.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:03.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:03.278 00:09:03.278 --- 10.0.0.3 ping statistics --- 00:09:03.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.278 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:03.278 08:59:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:03.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:03.278 00:09:03.278 --- 10.0.0.1 ping statistics --- 00:09:03.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.278 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:03.278 08:59:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.278 08:59:40 -- nvmf/common.sh@421 -- # return 0 00:09:03.278 08:59:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:03.278 08:59:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.278 08:59:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:03.278 08:59:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:03.278 08:59:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.278 08:59:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:03.278 08:59:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:03.538 08:59:40 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:03.538 08:59:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:03.538 08:59:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.538 08:59:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.538 08:59:40 -- nvmf/common.sh@469 -- # nvmfpid=61893 00:09:03.538 08:59:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.538 08:59:40 -- nvmf/common.sh@470 -- # waitforlisten 61893 00:09:03.538 08:59:40 -- common/autotest_common.sh@829 -- # '[' -z 61893 ']' 00:09:03.538 08:59:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.538 08:59:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.538 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:03.538 08:59:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.538 08:59:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.538 08:59:40 -- common/autotest_common.sh@10 -- # set +x 00:09:03.538 [2024-11-17 08:59:40.264192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:03.538 [2024-11-17 08:59:40.264306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.538 [2024-11-17 08:59:40.399403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.538 [2024-11-17 08:59:40.449184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:03.538 [2024-11-17 08:59:40.449345] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.538 [2024-11-17 08:59:40.449357] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.538 [2024-11-17 08:59:40.449364] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.538 [2024-11-17 08:59:40.449387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.476 08:59:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.476 08:59:41 -- common/autotest_common.sh@862 -- # return 0 00:09:04.476 08:59:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:04.476 08:59:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.476 08:59:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.476 08:59:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.476 08:59:41 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.476 08:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.476 08:59:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.476 [2024-11-17 08:59:41.349257] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.476 08:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.476 08:59:41 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:04.476 08:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.476 08:59:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.476 Malloc0 00:09:04.476 08:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.476 08:59:41 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:04.476 08:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.476 08:59:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.476 08:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.476 08:59:41 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.476 08:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.476 08:59:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.735 08:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.735 08:59:41 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
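The target-side setup traced above boils down to five RPC calls against the nvmf_tgt that was just started. A condensed, stand-alone sketch of the same sequence (paths and flags copied from this trace; the SPDK_REPO and RPC variables are introduced here only for readability, and rpc_cmd in the real script talks to the default /var/tmp/spdk.sock):

SPDK_REPO=/home/vagrant/spdk_repo/spdk        # checkout path as seen in this log
RPC="$SPDK_REPO/scripts/rpc.py"               # rpc_cmd is a thin wrapper around this

$RPC nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport, flags as traced
$RPC bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB RAM bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 # allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                  # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
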
00:09:04.735 08:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.735 08:59:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.735 [2024-11-17 08:59:41.407647] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.735 08:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.735 08:59:41 -- target/queue_depth.sh@30 -- # bdevperf_pid=61925 00:09:04.735 08:59:41 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.735 08:59:41 -- target/queue_depth.sh@33 -- # waitforlisten 61925 /var/tmp/bdevperf.sock 00:09:04.735 08:59:41 -- common/autotest_common.sh@829 -- # '[' -z 61925 ']' 00:09:04.735 08:59:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.735 08:59:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.735 08:59:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:04.735 08:59:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.735 08:59:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.735 08:59:41 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:04.735 [2024-11-17 08:59:41.466359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.735 [2024-11-17 08:59:41.466456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61925 ] 00:09:04.735 [2024-11-17 08:59:41.608796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.994 [2024-11-17 08:59:41.677108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.561 08:59:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.561 08:59:42 -- common/autotest_common.sh@862 -- # return 0 00:09:05.561 08:59:42 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:05.561 08:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.561 08:59:42 -- common/autotest_common.sh@10 -- # set +x 00:09:05.820 NVMe0n1 00:09:05.820 08:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.820 08:59:42 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:05.820 Running I/O for 10 seconds... 
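On the initiator side the queue-depth test drives I/O with bdevperf rather than the kernel host stack. A minimal sketch of the flow traced above, assuming the same binary and script locations; the socket wait below is a crude stand-in for the test's waitforlisten helper:

SPDK_REPO=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z), queue depth 1024, 4 KiB verify workload for 10 s.
"$SPDK_REPO/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
while [ ! -S "$SOCK" ]; do sleep 0.2; done    # wait for the RPC socket to appear

# Attach the remote namespace as bdev NVMe0n1, then kick off the run.
"$SPDK_REPO/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_REPO/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
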
00:09:15.811 00:09:15.811 Latency(us) 00:09:15.811 [2024-11-17T08:59:52.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.811 [2024-11-17T08:59:52.741Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:15.811 Verification LBA range: start 0x0 length 0x4000 00:09:15.811 NVMe0n1 : 10.07 15534.48 60.68 0.00 0.00 65662.99 14298.76 58148.31 00:09:15.811 [2024-11-17T08:59:52.741Z] =================================================================================================================== 00:09:15.811 [2024-11-17T08:59:52.741Z] Total : 15534.48 60.68 0.00 0.00 65662.99 14298.76 58148.31 00:09:15.811 0 00:09:15.811 08:59:52 -- target/queue_depth.sh@39 -- # killprocess 61925 00:09:15.811 08:59:52 -- common/autotest_common.sh@936 -- # '[' -z 61925 ']' 00:09:15.811 08:59:52 -- common/autotest_common.sh@940 -- # kill -0 61925 00:09:15.811 08:59:52 -- common/autotest_common.sh@941 -- # uname 00:09:15.811 08:59:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:15.811 08:59:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61925 00:09:15.811 08:59:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:15.811 08:59:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:15.811 killing process with pid 61925 00:09:15.811 08:59:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61925' 00:09:15.811 Received shutdown signal, test time was about 10.000000 seconds 00:09:15.811 00:09:15.811 Latency(us) 00:09:15.812 [2024-11-17T08:59:52.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.812 [2024-11-17T08:59:52.742Z] =================================================================================================================== 00:09:15.812 [2024-11-17T08:59:52.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:15.812 08:59:52 -- common/autotest_common.sh@955 -- # kill 61925 00:09:15.812 08:59:52 -- common/autotest_common.sh@960 -- # wait 61925 00:09:16.071 08:59:52 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:16.071 08:59:52 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:16.071 08:59:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:16.071 08:59:52 -- nvmf/common.sh@116 -- # sync 00:09:16.071 08:59:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:16.071 08:59:52 -- nvmf/common.sh@119 -- # set +e 00:09:16.071 08:59:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:16.071 08:59:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:16.071 rmmod nvme_tcp 00:09:16.071 rmmod nvme_fabrics 00:09:16.071 rmmod nvme_keyring 00:09:16.071 08:59:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:16.331 08:59:52 -- nvmf/common.sh@123 -- # set -e 00:09:16.331 08:59:52 -- nvmf/common.sh@124 -- # return 0 00:09:16.331 08:59:52 -- nvmf/common.sh@477 -- # '[' -n 61893 ']' 00:09:16.331 08:59:52 -- nvmf/common.sh@478 -- # killprocess 61893 00:09:16.331 08:59:52 -- common/autotest_common.sh@936 -- # '[' -z 61893 ']' 00:09:16.331 08:59:52 -- common/autotest_common.sh@940 -- # kill -0 61893 00:09:16.331 08:59:53 -- common/autotest_common.sh@941 -- # uname 00:09:16.331 08:59:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:16.331 08:59:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61893 00:09:16.331 08:59:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:16.331 08:59:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:09:16.331 killing process with pid 61893 00:09:16.331 08:59:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61893' 00:09:16.331 08:59:53 -- common/autotest_common.sh@955 -- # kill 61893 00:09:16.331 08:59:53 -- common/autotest_common.sh@960 -- # wait 61893 00:09:16.331 08:59:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:16.331 08:59:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:16.331 08:59:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:16.331 08:59:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.331 08:59:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:16.331 08:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.331 08:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.331 08:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.331 08:59:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:16.591 00:09:16.591 real 0m13.565s 00:09:16.591 user 0m23.760s 00:09:16.591 sys 0m1.847s 00:09:16.591 08:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:16.591 ************************************ 00:09:16.591 END TEST nvmf_queue_depth 00:09:16.591 08:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:16.591 ************************************ 00:09:16.591 08:59:53 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.591 08:59:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:16.591 08:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:16.591 08:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:16.591 ************************************ 00:09:16.591 START TEST nvmf_multipath 00:09:16.591 ************************************ 00:09:16.591 08:59:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.591 * Looking for test storage... 00:09:16.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.591 08:59:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:16.591 08:59:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:16.591 08:59:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:16.591 08:59:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:16.591 08:59:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:16.591 08:59:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:16.591 08:59:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:16.591 08:59:53 -- scripts/common.sh@335 -- # IFS=.-: 00:09:16.591 08:59:53 -- scripts/common.sh@335 -- # read -ra ver1 00:09:16.591 08:59:53 -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.591 08:59:53 -- scripts/common.sh@336 -- # read -ra ver2 00:09:16.591 08:59:53 -- scripts/common.sh@337 -- # local 'op=<' 00:09:16.591 08:59:53 -- scripts/common.sh@339 -- # ver1_l=2 00:09:16.591 08:59:53 -- scripts/common.sh@340 -- # ver2_l=1 00:09:16.592 08:59:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:16.592 08:59:53 -- scripts/common.sh@343 -- # case "$op" in 00:09:16.592 08:59:53 -- scripts/common.sh@344 -- # : 1 00:09:16.592 08:59:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:16.592 08:59:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.592 08:59:53 -- scripts/common.sh@364 -- # decimal 1 00:09:16.592 08:59:53 -- scripts/common.sh@352 -- # local d=1 00:09:16.592 08:59:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.592 08:59:53 -- scripts/common.sh@354 -- # echo 1 00:09:16.592 08:59:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:16.592 08:59:53 -- scripts/common.sh@365 -- # decimal 2 00:09:16.592 08:59:53 -- scripts/common.sh@352 -- # local d=2 00:09:16.592 08:59:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.592 08:59:53 -- scripts/common.sh@354 -- # echo 2 00:09:16.592 08:59:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:16.592 08:59:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:16.592 08:59:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:16.592 08:59:53 -- scripts/common.sh@367 -- # return 0 00:09:16.592 08:59:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.592 08:59:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:16.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.592 --rc genhtml_branch_coverage=1 00:09:16.592 --rc genhtml_function_coverage=1 00:09:16.592 --rc genhtml_legend=1 00:09:16.592 --rc geninfo_all_blocks=1 00:09:16.592 --rc geninfo_unexecuted_blocks=1 00:09:16.592 00:09:16.592 ' 00:09:16.592 08:59:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:16.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.592 --rc genhtml_branch_coverage=1 00:09:16.592 --rc genhtml_function_coverage=1 00:09:16.592 --rc genhtml_legend=1 00:09:16.592 --rc geninfo_all_blocks=1 00:09:16.592 --rc geninfo_unexecuted_blocks=1 00:09:16.592 00:09:16.592 ' 00:09:16.592 08:59:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:16.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.592 --rc genhtml_branch_coverage=1 00:09:16.592 --rc genhtml_function_coverage=1 00:09:16.592 --rc genhtml_legend=1 00:09:16.592 --rc geninfo_all_blocks=1 00:09:16.592 --rc geninfo_unexecuted_blocks=1 00:09:16.592 00:09:16.592 ' 00:09:16.592 08:59:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:16.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.592 --rc genhtml_branch_coverage=1 00:09:16.592 --rc genhtml_function_coverage=1 00:09:16.592 --rc genhtml_legend=1 00:09:16.592 --rc geninfo_all_blocks=1 00:09:16.592 --rc geninfo_unexecuted_blocks=1 00:09:16.592 00:09:16.592 ' 00:09:16.592 08:59:53 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.592 08:59:53 -- nvmf/common.sh@7 -- # uname -s 00:09:16.592 08:59:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.592 08:59:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.592 08:59:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.592 08:59:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.592 08:59:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.592 08:59:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.592 08:59:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.592 08:59:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.592 08:59:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.592 08:59:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.592 08:59:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:09:16.592 
08:59:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:09:16.592 08:59:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.592 08:59:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.592 08:59:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.592 08:59:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.592 08:59:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.592 08:59:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.592 08:59:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.592 08:59:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.592 08:59:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.592 08:59:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.592 08:59:53 -- paths/export.sh@5 -- # export PATH 00:09:16.592 08:59:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.592 08:59:53 -- nvmf/common.sh@46 -- # : 0 00:09:16.592 08:59:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:16.592 08:59:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:16.592 08:59:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:16.592 08:59:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.592 08:59:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.592 08:59:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
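The multipath test then rebuilds the same virtual topology as the previous test: one initiator-side veth pair plus two target-side pairs whose far ends live in a dedicated network namespace, all attached to one bridge, so 10.0.0.2 and 10.0.0.3 become two independent paths to the target. A condensed sketch of the nvmf_veth_init steps traced below, using the interface, namespace, and address names exactly as they appear in this log:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1 (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2 (10.0.0.3)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
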
00:09:16.592 08:59:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:16.592 08:59:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:16.592 08:59:53 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.592 08:59:53 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.592 08:59:53 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:16.592 08:59:53 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.592 08:59:53 -- target/multipath.sh@43 -- # nvmftestinit 00:09:16.592 08:59:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:16.592 08:59:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.592 08:59:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:16.592 08:59:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:16.592 08:59:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:16.592 08:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.592 08:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.592 08:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.592 08:59:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:16.592 08:59:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:16.592 08:59:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:16.592 08:59:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:16.592 08:59:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:16.592 08:59:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:16.592 08:59:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.592 08:59:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.592 08:59:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:16.592 08:59:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:16.592 08:59:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.592 08:59:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.592 08:59:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.592 08:59:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.592 08:59:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.592 08:59:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.592 08:59:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.592 08:59:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.592 08:59:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:16.852 08:59:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:16.852 Cannot find device "nvmf_tgt_br" 00:09:16.852 08:59:53 -- nvmf/common.sh@154 -- # true 00:09:16.852 08:59:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.852 Cannot find device "nvmf_tgt_br2" 00:09:16.852 08:59:53 -- nvmf/common.sh@155 -- # true 00:09:16.852 08:59:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:16.852 08:59:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:16.852 Cannot find device "nvmf_tgt_br" 00:09:16.852 08:59:53 -- nvmf/common.sh@157 -- # true 00:09:16.852 08:59:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:16.852 Cannot find device "nvmf_tgt_br2" 00:09:16.852 08:59:53 -- nvmf/common.sh@158 -- # true 00:09:16.852 08:59:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:16.852 08:59:53 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:16.852 08:59:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.852 08:59:53 -- nvmf/common.sh@161 -- # true 00:09:16.852 08:59:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.852 08:59:53 -- nvmf/common.sh@162 -- # true 00:09:16.852 08:59:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.852 08:59:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.852 08:59:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.852 08:59:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.852 08:59:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.852 08:59:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.852 08:59:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.852 08:59:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:16.852 08:59:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:16.852 08:59:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:16.852 08:59:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:16.852 08:59:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:16.852 08:59:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:16.852 08:59:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:16.852 08:59:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:16.852 08:59:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:16.852 08:59:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:16.852 08:59:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:16.852 08:59:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:16.852 08:59:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:17.112 08:59:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:17.112 08:59:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:17.112 08:59:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:17.112 08:59:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:17.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:09:17.112 00:09:17.112 --- 10.0.0.2 ping statistics --- 00:09:17.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.112 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:17.112 08:59:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:17.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:17.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:09:17.112 00:09:17.112 --- 10.0.0.3 ping statistics --- 00:09:17.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.112 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:17.112 08:59:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:17.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:17.112 00:09:17.112 --- 10.0.0.1 ping statistics --- 00:09:17.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.112 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:17.112 08:59:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.112 08:59:53 -- nvmf/common.sh@421 -- # return 0 00:09:17.112 08:59:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:17.112 08:59:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.112 08:59:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:17.112 08:59:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:17.112 08:59:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.112 08:59:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:17.112 08:59:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:17.112 08:59:53 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:17.112 08:59:53 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:17.112 08:59:53 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:17.112 08:59:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:17.112 08:59:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.112 08:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:17.112 08:59:53 -- nvmf/common.sh@469 -- # nvmfpid=62253 00:09:17.112 08:59:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.112 08:59:53 -- nvmf/common.sh@470 -- # waitforlisten 62253 00:09:17.112 08:59:53 -- common/autotest_common.sh@829 -- # '[' -z 62253 ']' 00:09:17.112 08:59:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.112 08:59:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.112 08:59:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.112 08:59:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.112 08:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:17.112 [2024-11-17 08:59:53.889753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:17.112 [2024-11-17 08:59:53.889879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.112 [2024-11-17 08:59:54.028306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.371 [2024-11-17 08:59:54.084221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:17.371 [2024-11-17 08:59:54.084389] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:17.371 [2024-11-17 08:59:54.084402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.371 [2024-11-17 08:59:54.084410] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.371 [2024-11-17 08:59:54.084630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.371 [2024-11-17 08:59:54.084760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.371 [2024-11-17 08:59:54.085312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.371 [2024-11-17 08:59:54.085347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.938 08:59:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.938 08:59:54 -- common/autotest_common.sh@862 -- # return 0 00:09:17.938 08:59:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:18.197 08:59:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:18.197 08:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:18.197 08:59:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.197 08:59:54 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:18.197 [2024-11-17 08:59:55.116622] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.456 08:59:55 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:18.456 Malloc0 00:09:18.714 08:59:55 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:18.714 08:59:55 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.973 08:59:55 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.233 [2024-11-17 08:59:56.049144] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.233 08:59:56 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:19.492 [2024-11-17 08:59:56.273263] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:19.492 08:59:56 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:19.751 08:59:56 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:19.751 08:59:56 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.751 08:59:56 -- common/autotest_common.sh@1187 -- # local i=0 00:09:19.751 08:59:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.751 08:59:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:19.751 08:59:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:21.661 08:59:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
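Because the subsystem was created with -r (ANA reporting) and listeners exist on both 10.0.0.2 and 10.0.0.3, the kernel initiator is connected once per path and a single /dev/nvme0n1 ends up backed by two controllers (nvme0c0n1 and nvme0c1n1). A sketch of the connect-and-wait step around this point, with the host NQN/ID from this log pulled into shell variables for readability; the loop is a simplified version of the waitforserial helper:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c
HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c

# One connect per listener address; both land in the same subsystem (flags as traced).
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

# Poll until a block device with the expected serial shows up.
for _ in $(seq 1 15); do
    lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME && break
    sleep 2
done
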
00:09:21.661 08:59:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:21.661 08:59:58 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.661 08:59:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:21.661 08:59:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.661 08:59:58 -- common/autotest_common.sh@1197 -- # return 0 00:09:21.661 08:59:58 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:21.661 08:59:58 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:21.661 08:59:58 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:21.661 08:59:58 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:21.661 08:59:58 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:21.661 08:59:58 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:21.661 08:59:58 -- target/multipath.sh@38 -- # return 0 00:09:21.919 08:59:58 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:21.919 08:59:58 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:21.919 08:59:58 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:21.919 08:59:58 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:21.919 08:59:58 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:21.919 08:59:58 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:21.919 08:59:58 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:21.919 08:59:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:21.919 08:59:58 -- target/multipath.sh@22 -- # local timeout=20 00:09:21.919 08:59:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.919 08:59:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.919 08:59:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:21.919 08:59:58 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:21.919 08:59:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:21.919 08:59:58 -- target/multipath.sh@22 -- # local timeout=20 00:09:21.919 08:59:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.919 08:59:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.919 08:59:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:21.919 08:59:58 -- target/multipath.sh@85 -- # echo numa 00:09:21.919 08:59:58 -- target/multipath.sh@88 -- # fio_pid=62344 00:09:21.920 08:59:58 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:21.920 08:59:58 -- target/multipath.sh@90 -- # sleep 1 00:09:21.920 [global] 00:09:21.920 thread=1 00:09:21.920 invalidate=1 00:09:21.920 rw=randrw 00:09:21.920 time_based=1 00:09:21.920 runtime=6 00:09:21.920 ioengine=libaio 00:09:21.920 direct=1 00:09:21.920 bs=4096 00:09:21.920 iodepth=128 00:09:21.920 norandommap=0 00:09:21.920 numjobs=1 00:09:21.920 00:09:21.920 verify_dump=1 00:09:21.920 verify_backlog=512 00:09:21.920 verify_state_save=0 00:09:21.920 do_verify=1 00:09:21.920 verify=crc32c-intel 00:09:21.920 [job0] 00:09:21.920 filename=/dev/nvme0n1 00:09:21.920 Could not set queue depth (nvme0n1) 00:09:21.920 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.920 fio-3.35 00:09:21.920 Starting 1 thread 00:09:22.863 08:59:59 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:23.122 08:59:59 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:23.382 09:00:00 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:23.382 09:00:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:23.382 09:00:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:23.382 09:00:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:23.382 09:00:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:23.382 09:00:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:23.382 09:00:00 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:23.382 09:00:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:23.382 09:00:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:23.382 09:00:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:23.382 09:00:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:23.382 09:00:00 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:23.382 09:00:00 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:23.641 09:00:00 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:23.901 09:00:00 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:23.901 09:00:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:23.901 09:00:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:23.901 09:00:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:23.901 09:00:00 -- target/multipath.sh@25 -- # [[ ! 
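While the fio job above runs against /dev/nvme0n1, the test flips each listener's ANA state through the target's RPC interface and waits for the kernel's per-path view in /sys/block/nvme0cXn1/ana_state to follow, which is what the check_ana_state calls below are doing. A reduced sketch of one failover round; wait_ana is a hypothetical stand-in for the test's check_ana_state helper, and the RPC path is taken from this log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Make path 1 (10.0.0.2) unusable and path 2 (10.0.0.3) the reachable one.
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

wait_ana() {  # wait_ana <controller-block-name> <expected-sysfs-state>
    local f=/sys/block/$1/ana_state
    for _ in $(seq 1 20); do
        [[ -e $f && $(<"$f") == "$2" ]] && return 0
        sleep 1
    done
    return 1
}
wait_ana nvme0c0n1 inaccessible
wait_ana nvme0c1n1 non-optimized
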
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:23.901 09:00:00 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:23.901 09:00:00 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:23.901 09:00:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:23.901 09:00:00 -- target/multipath.sh@22 -- # local timeout=20 00:09:23.901 09:00:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:23.901 09:00:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:23.901 09:00:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:23.901 09:00:00 -- target/multipath.sh@104 -- # wait 62344 00:09:28.103 00:09:28.103 job0: (groupid=0, jobs=1): err= 0: pid=62365: Sun Nov 17 09:00:04 2024 00:09:28.103 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(250MiB/6006msec) 00:09:28.103 slat (usec): min=7, max=5843, avg=54.82, stdev=231.68 00:09:28.103 clat (usec): min=1515, max=14710, avg=8115.76, stdev=1424.70 00:09:28.103 lat (usec): min=1530, max=16141, avg=8170.58, stdev=1429.74 00:09:28.103 clat percentiles (usec): 00:09:28.103 | 1.00th=[ 4228], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7308], 00:09:28.103 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8225], 00:09:28.103 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11338], 00:09:28.103 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13566], 99.95th=[14091], 00:09:28.103 | 99.99th=[14353] 00:09:28.103 bw ( KiB/s): min= 9032, max=26824, per=52.04%, avg=22208.00, stdev=5787.83, samples=11 00:09:28.103 iops : min= 2258, max= 6708, avg=5552.00, stdev=1447.04, samples=11 00:09:28.103 write: IOPS=6268, BW=24.5MiB/s (25.7MB/s)(131MiB/5367msec); 0 zone resets 00:09:28.103 slat (usec): min=14, max=1893, avg=64.73, stdev=158.51 00:09:28.103 clat (usec): min=2487, max=14170, avg=7191.87, stdev=1264.97 00:09:28.103 lat (usec): min=2515, max=14194, avg=7256.60, stdev=1269.51 00:09:28.103 clat percentiles (usec): 00:09:28.103 | 1.00th=[ 3261], 5.00th=[ 4228], 10.00th=[ 5735], 20.00th=[ 6718], 00:09:28.103 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:09:28.103 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8586], 00:09:28.103 | 99.00th=[10945], 99.50th=[11600], 99.90th=[12649], 99.95th=[13042], 00:09:28.103 | 99.99th=[13435] 00:09:28.103 bw ( KiB/s): min= 9480, max=26472, per=88.67%, avg=22233.45, stdev=5472.84, samples=11 00:09:28.103 iops : min= 2370, max= 6618, avg=5558.36, stdev=1368.21, samples=11 00:09:28.103 lat (msec) : 2=0.02%, 4=1.79%, 10=92.52%, 20=5.67% 00:09:28.103 cpu : usr=5.25%, sys=21.72%, ctx=5504, majf=0, minf=90 00:09:28.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:28.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.103 issued rwts: total=64071,33642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.103 00:09:28.103 Run status group 0 (all jobs): 00:09:28.103 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=250MiB (262MB), run=6006-6006msec 00:09:28.103 WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=131MiB (138MB), run=5367-5367msec 00:09:28.103 00:09:28.103 Disk stats (read/write): 00:09:28.103 nvme0n1: ios=63067/33015, merge=0/0, 
ticks=489339/223154, in_queue=712493, util=98.60% 00:09:28.103 09:00:04 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:28.361 09:00:05 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:28.620 09:00:05 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:28.620 09:00:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:28.620 09:00:05 -- target/multipath.sh@22 -- # local timeout=20 00:09:28.620 09:00:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:28.620 09:00:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:28.620 09:00:05 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:28.620 09:00:05 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:28.620 09:00:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:28.620 09:00:05 -- target/multipath.sh@22 -- # local timeout=20 00:09:28.620 09:00:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:28.620 09:00:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:28.620 09:00:05 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:28.620 09:00:05 -- target/multipath.sh@113 -- # echo round-robin 00:09:28.620 09:00:05 -- target/multipath.sh@116 -- # fio_pid=62446 00:09:28.620 09:00:05 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:28.620 09:00:05 -- target/multipath.sh@118 -- # sleep 1 00:09:28.620 [global] 00:09:28.620 thread=1 00:09:28.620 invalidate=1 00:09:28.620 rw=randrw 00:09:28.620 time_based=1 00:09:28.620 runtime=6 00:09:28.620 ioengine=libaio 00:09:28.620 direct=1 00:09:28.620 bs=4096 00:09:28.620 iodepth=128 00:09:28.620 norandommap=0 00:09:28.620 numjobs=1 00:09:28.620 00:09:28.620 verify_dump=1 00:09:28.620 verify_backlog=512 00:09:28.620 verify_state_save=0 00:09:28.620 do_verify=1 00:09:28.620 verify=crc32c-intel 00:09:28.620 [job0] 00:09:28.620 filename=/dev/nvme0n1 00:09:28.620 Could not set queue depth (nvme0n1) 00:09:28.879 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.879 fio-3.35 00:09:28.879 Starting 1 thread 00:09:29.815 09:00:06 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:30.074 09:00:06 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:30.333 09:00:07 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:30.333 09:00:07 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:30.333 09:00:07 -- target/multipath.sh@22 -- # local timeout=20 00:09:30.333 09:00:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:30.333 09:00:07 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:30.333 09:00:07 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:30.333 09:00:07 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:30.333 09:00:07 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:30.333 09:00:07 -- target/multipath.sh@22 -- # local timeout=20 00:09:30.333 09:00:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:30.333 09:00:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:30.333 09:00:07 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:30.333 09:00:07 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:30.591 09:00:07 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:30.849 09:00:07 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:30.849 09:00:07 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:30.849 09:00:07 -- target/multipath.sh@22 -- # local timeout=20 00:09:30.849 09:00:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:30.849 09:00:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:30.849 09:00:07 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:30.849 09:00:07 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:30.849 09:00:07 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:30.849 09:00:07 -- target/multipath.sh@22 -- # local timeout=20 00:09:30.849 09:00:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:30.849 09:00:07 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:30.849 09:00:07 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:30.849 09:00:07 -- target/multipath.sh@132 -- # wait 62446 00:09:35.042 00:09:35.042 job0: (groupid=0, jobs=1): err= 0: pid=62467: Sun Nov 17 09:00:11 2024 00:09:35.042 read: IOPS=11.9k, BW=46.3MiB/s (48.5MB/s)(278MiB/6002msec) 00:09:35.042 slat (usec): min=5, max=9151, avg=41.70, stdev=198.75 00:09:35.042 clat (usec): min=1394, max=18805, avg=7320.00, stdev=1727.78 00:09:35.042 lat (usec): min=1405, max=18831, avg=7361.70, stdev=1741.75 00:09:35.042 clat percentiles (usec): 00:09:35.042 | 1.00th=[ 3556], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5800], 00:09:35.042 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 7767], 00:09:35.042 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[10552], 00:09:35.042 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13566], 99.95th=[13960], 00:09:35.042 | 99.99th=[13960] 00:09:35.042 bw ( KiB/s): min=12640, max=41952, per=54.68%, avg=25921.36, stdev=8573.38, samples=11 00:09:35.042 iops : min= 3160, max=10488, avg=6480.27, stdev=2143.38, samples=11 00:09:35.042 write: IOPS=7105, BW=27.8MiB/s (29.1MB/s)(150MiB/5402msec); 0 zone resets 00:09:35.042 slat (usec): min=12, max=2885, avg=52.71, stdev=132.74 00:09:35.042 clat (usec): min=1591, max=13820, avg=6284.96, stdev=1686.01 00:09:35.042 lat (usec): min=1647, max=13844, avg=6337.67, stdev=1700.21 00:09:35.042 clat percentiles (usec): 00:09:35.042 | 1.00th=[ 2704], 5.00th=[ 3326], 10.00th=[ 3752], 20.00th=[ 4424], 00:09:35.042 | 30.00th=[ 5276], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7111], 00:09:35.042 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 7963], 95.00th=[ 8291], 00:09:35.042 | 99.00th=[10159], 99.50th=[11076], 99.90th=[11863], 99.95th=[12649], 00:09:35.042 | 99.99th=[13435] 00:09:35.042 bw ( KiB/s): min=13312, max=41320, per=91.10%, avg=25893.82, stdev=8379.95, samples=11 00:09:35.042 iops : min= 3328, max=10330, avg=6473.36, stdev=2095.02, samples=11 00:09:35.042 lat (msec) : 2=0.06%, 4=6.34%, 10=89.58%, 20=4.03% 00:09:35.042 cpu : usr=5.85%, sys=22.93%, ctx=5888, majf=0, minf=127 00:09:35.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:35.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.042 issued rwts: total=71137,38384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.042 00:09:35.042 Run status group 0 (all jobs): 00:09:35.042 READ: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=278MiB (291MB), run=6002-6002msec 00:09:35.042 WRITE: bw=27.8MiB/s (29.1MB/s), 27.8MiB/s-27.8MiB/s (29.1MB/s-29.1MB/s), io=150MiB (157MB), run=5402-5402msec 00:09:35.042 00:09:35.042 Disk stats (read/write): 00:09:35.042 nvme0n1: ios=69715/38250, merge=0/0, ticks=485937/224099, in_queue=710036, util=98.51% 00:09:35.042 09:00:11 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:35.042 09:00:11 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.042 09:00:11 -- common/autotest_common.sh@1208 -- # local i=0 00:09:35.042 09:00:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:35.042 09:00:11 -- common/autotest_common.sh@1209 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:09:35.042 09:00:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:35.042 09:00:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.042 09:00:11 -- common/autotest_common.sh@1220 -- # return 0 00:09:35.042 09:00:11 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.301 09:00:12 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:35.301 09:00:12 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:35.301 09:00:12 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:35.301 09:00:12 -- target/multipath.sh@144 -- # nvmftestfini 00:09:35.301 09:00:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:35.301 09:00:12 -- nvmf/common.sh@116 -- # sync 00:09:35.301 09:00:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:35.301 09:00:12 -- nvmf/common.sh@119 -- # set +e 00:09:35.301 09:00:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:35.301 09:00:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:35.301 rmmod nvme_tcp 00:09:35.301 rmmod nvme_fabrics 00:09:35.301 rmmod nvme_keyring 00:09:35.301 09:00:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:35.301 09:00:12 -- nvmf/common.sh@123 -- # set -e 00:09:35.301 09:00:12 -- nvmf/common.sh@124 -- # return 0 00:09:35.301 09:00:12 -- nvmf/common.sh@477 -- # '[' -n 62253 ']' 00:09:35.301 09:00:12 -- nvmf/common.sh@478 -- # killprocess 62253 00:09:35.301 09:00:12 -- common/autotest_common.sh@936 -- # '[' -z 62253 ']' 00:09:35.301 09:00:12 -- common/autotest_common.sh@940 -- # kill -0 62253 00:09:35.301 09:00:12 -- common/autotest_common.sh@941 -- # uname 00:09:35.301 09:00:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:35.301 09:00:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62253 00:09:35.559 killing process with pid 62253 00:09:35.560 09:00:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:35.560 09:00:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:35.560 09:00:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62253' 00:09:35.560 09:00:12 -- common/autotest_common.sh@955 -- # kill 62253 00:09:35.560 09:00:12 -- common/autotest_common.sh@960 -- # wait 62253 00:09:35.560 09:00:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:35.560 09:00:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:35.560 09:00:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:35.560 09:00:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.560 09:00:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:35.560 09:00:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.560 09:00:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.560 09:00:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.560 09:00:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:35.560 00:09:35.560 real 0m19.158s 00:09:35.560 user 1m11.541s 00:09:35.560 sys 0m9.790s 00:09:35.560 09:00:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.560 09:00:12 -- common/autotest_common.sh@10 -- # set +x 00:09:35.560 ************************************ 00:09:35.560 END TEST nvmf_multipath 00:09:35.560 ************************************ 00:09:35.820 09:00:12 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:35.820 09:00:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:35.820 09:00:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.820 09:00:12 -- common/autotest_common.sh@10 -- # set +x 00:09:35.820 ************************************ 00:09:35.820 START TEST nvmf_zcopy 00:09:35.820 ************************************ 00:09:35.820 09:00:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:35.820 * Looking for test storage... 00:09:35.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.820 09:00:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:35.820 09:00:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:35.820 09:00:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:35.820 09:00:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:35.820 09:00:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:35.820 09:00:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:35.820 09:00:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:35.820 09:00:12 -- scripts/common.sh@335 -- # IFS=.-: 00:09:35.820 09:00:12 -- scripts/common.sh@335 -- # read -ra ver1 00:09:35.820 09:00:12 -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.820 09:00:12 -- scripts/common.sh@336 -- # read -ra ver2 00:09:35.820 09:00:12 -- scripts/common.sh@337 -- # local 'op=<' 00:09:35.820 09:00:12 -- scripts/common.sh@339 -- # ver1_l=2 00:09:35.820 09:00:12 -- scripts/common.sh@340 -- # ver2_l=1 00:09:35.820 09:00:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:35.820 09:00:12 -- scripts/common.sh@343 -- # case "$op" in 00:09:35.820 09:00:12 -- scripts/common.sh@344 -- # : 1 00:09:35.820 09:00:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:35.820 09:00:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.820 09:00:12 -- scripts/common.sh@364 -- # decimal 1 00:09:35.820 09:00:12 -- scripts/common.sh@352 -- # local d=1 00:09:35.820 09:00:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.820 09:00:12 -- scripts/common.sh@354 -- # echo 1 00:09:35.820 09:00:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:35.820 09:00:12 -- scripts/common.sh@365 -- # decimal 2 00:09:35.820 09:00:12 -- scripts/common.sh@352 -- # local d=2 00:09:35.820 09:00:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.820 09:00:12 -- scripts/common.sh@354 -- # echo 2 00:09:35.820 09:00:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:35.820 09:00:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:35.820 09:00:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:35.820 09:00:12 -- scripts/common.sh@367 -- # return 0 00:09:35.820 09:00:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.820 09:00:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:35.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.820 --rc genhtml_branch_coverage=1 00:09:35.820 --rc genhtml_function_coverage=1 00:09:35.820 --rc genhtml_legend=1 00:09:35.820 --rc geninfo_all_blocks=1 00:09:35.820 --rc geninfo_unexecuted_blocks=1 00:09:35.820 00:09:35.820 ' 00:09:35.820 09:00:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:35.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.820 --rc genhtml_branch_coverage=1 00:09:35.820 --rc genhtml_function_coverage=1 00:09:35.820 --rc genhtml_legend=1 00:09:35.820 --rc geninfo_all_blocks=1 00:09:35.820 --rc geninfo_unexecuted_blocks=1 00:09:35.820 00:09:35.820 ' 00:09:35.820 09:00:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:35.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.820 --rc genhtml_branch_coverage=1 00:09:35.820 --rc genhtml_function_coverage=1 00:09:35.820 --rc genhtml_legend=1 00:09:35.820 --rc geninfo_all_blocks=1 00:09:35.820 --rc geninfo_unexecuted_blocks=1 00:09:35.820 00:09:35.820 ' 00:09:35.820 09:00:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:35.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.820 --rc genhtml_branch_coverage=1 00:09:35.820 --rc genhtml_function_coverage=1 00:09:35.820 --rc genhtml_legend=1 00:09:35.820 --rc geninfo_all_blocks=1 00:09:35.820 --rc geninfo_unexecuted_blocks=1 00:09:35.820 00:09:35.820 ' 00:09:35.820 09:00:12 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.820 09:00:12 -- nvmf/common.sh@7 -- # uname -s 00:09:35.820 09:00:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.820 09:00:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.820 09:00:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.820 09:00:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.820 09:00:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.820 09:00:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.820 09:00:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.820 09:00:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.820 09:00:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.820 09:00:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.820 09:00:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:09:35.820 
09:00:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:09:35.820 09:00:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.820 09:00:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.820 09:00:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.820 09:00:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.820 09:00:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.820 09:00:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.820 09:00:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.820 09:00:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.820 09:00:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.820 09:00:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.820 09:00:12 -- paths/export.sh@5 -- # export PATH 00:09:35.820 09:00:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.820 09:00:12 -- nvmf/common.sh@46 -- # : 0 00:09:35.820 09:00:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:35.820 09:00:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:35.820 09:00:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:35.820 09:00:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.820 09:00:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.820 09:00:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
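For reference, the host identity captured by nvmf/common.sh above is the pair the suite's nvme connect invocations are meant to pass along; a minimal sketch of what this run set (the UUID is generated per run by nvme gen-hostnqn, not a fixed constant):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # here: nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c
    NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c            # UUID portion of the NQN, used as --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'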
00:09:35.820 09:00:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:35.820 09:00:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:35.820 09:00:12 -- target/zcopy.sh@12 -- # nvmftestinit 00:09:35.820 09:00:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:35.820 09:00:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.820 09:00:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:35.820 09:00:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:35.820 09:00:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:35.820 09:00:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.820 09:00:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.820 09:00:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.820 09:00:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:35.820 09:00:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:35.820 09:00:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:35.820 09:00:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:35.820 09:00:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:35.820 09:00:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:35.820 09:00:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.820 09:00:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.820 09:00:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:35.820 09:00:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:35.820 09:00:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.820 09:00:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.820 09:00:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.820 09:00:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.820 09:00:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.820 09:00:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.820 09:00:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.820 09:00:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.820 09:00:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:35.820 09:00:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:36.078 Cannot find device "nvmf_tgt_br" 00:09:36.078 09:00:12 -- nvmf/common.sh@154 -- # true 00:09:36.078 09:00:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.078 Cannot find device "nvmf_tgt_br2" 00:09:36.078 09:00:12 -- nvmf/common.sh@155 -- # true 00:09:36.078 09:00:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:36.079 09:00:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:36.079 Cannot find device "nvmf_tgt_br" 00:09:36.079 09:00:12 -- nvmf/common.sh@157 -- # true 00:09:36.079 09:00:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:36.079 Cannot find device "nvmf_tgt_br2" 00:09:36.079 09:00:12 -- nvmf/common.sh@158 -- # true 00:09:36.079 09:00:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:36.079 09:00:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:36.079 09:00:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.079 09:00:12 -- nvmf/common.sh@161 -- # true 00:09:36.079 09:00:12 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.079 09:00:12 -- nvmf/common.sh@162 -- # true 00:09:36.079 09:00:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.079 09:00:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.079 09:00:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.079 09:00:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.079 09:00:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.079 09:00:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.079 09:00:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.079 09:00:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:36.079 09:00:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:36.079 09:00:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:36.079 09:00:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:36.079 09:00:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:36.079 09:00:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:36.079 09:00:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.079 09:00:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:36.079 09:00:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:36.079 09:00:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:36.079 09:00:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:36.079 09:00:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.337 09:00:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.337 09:00:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.337 09:00:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.337 09:00:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.337 09:00:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:36.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:09:36.337 00:09:36.337 --- 10.0.0.2 ping statistics --- 00:09:36.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.337 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:36.337 09:00:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:36.337 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.337 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:36.337 00:09:36.337 --- 10.0.0.3 ping statistics --- 00:09:36.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.337 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:36.337 09:00:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:36.337 00:09:36.337 --- 10.0.0.1 ping statistics --- 00:09:36.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.337 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:36.337 09:00:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.338 09:00:13 -- nvmf/common.sh@421 -- # return 0 00:09:36.338 09:00:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:36.338 09:00:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.338 09:00:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:36.338 09:00:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:36.338 09:00:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.338 09:00:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:36.338 09:00:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:36.338 09:00:13 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:36.338 09:00:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:36.338 09:00:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.338 09:00:13 -- common/autotest_common.sh@10 -- # set +x 00:09:36.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.338 09:00:13 -- nvmf/common.sh@469 -- # nvmfpid=62721 00:09:36.338 09:00:13 -- nvmf/common.sh@470 -- # waitforlisten 62721 00:09:36.338 09:00:13 -- common/autotest_common.sh@829 -- # '[' -z 62721 ']' 00:09:36.338 09:00:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.338 09:00:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:36.338 09:00:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.338 09:00:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.338 09:00:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.338 09:00:13 -- common/autotest_common.sh@10 -- # set +x 00:09:36.338 [2024-11-17 09:00:13.151861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:36.338 [2024-11-17 09:00:13.152758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.596 [2024-11-17 09:00:13.294447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.596 [2024-11-17 09:00:13.363427] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.596 [2024-11-17 09:00:13.363582] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.596 [2024-11-17 09:00:13.363626] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.596 [2024-11-17 09:00:13.363640] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
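Condensed from the nvmf_veth_init trace above, the virtual topology this zcopy run sits on looks roughly like this (interface names and addresses are the ones used in this run; the intermediate "ip link set ... up" steps are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The target application is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so it owns 10.0.0.2/10.0.0.3 while the initiator-side tools run from the root namespace on 10.0.0.1; the three pings above confirm reachability in both directions before the test proceeds.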
00:09:36.596 [2024-11-17 09:00:13.363679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.164 09:00:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.164 09:00:14 -- common/autotest_common.sh@862 -- # return 0 00:09:37.164 09:00:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:37.164 09:00:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.164 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:09:37.164 09:00:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.164 09:00:14 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:37.164 09:00:14 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:37.164 09:00:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.164 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:09:37.164 [2024-11-17 09:00:14.081284] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.164 09:00:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.164 09:00:14 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:37.164 09:00:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.164 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:09:37.424 09:00:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.424 09:00:14 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.424 09:00:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.424 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:09:37.424 [2024-11-17 09:00:14.097382] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.424 09:00:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.424 09:00:14 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:37.424 09:00:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.424 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:09:37.424 09:00:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.424 09:00:14 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:37.424 09:00:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.424 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:09:37.424 malloc0 00:09:37.424 09:00:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.424 09:00:14 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:37.424 09:00:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.424 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:09:37.424 09:00:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.424 09:00:14 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:37.424 09:00:14 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:37.424 09:00:14 -- nvmf/common.sh@520 -- # config=() 00:09:37.424 09:00:14 -- nvmf/common.sh@520 -- # local subsystem config 00:09:37.424 09:00:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:37.424 09:00:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:37.424 { 00:09:37.424 "params": { 00:09:37.424 "name": "Nvme$subsystem", 00:09:37.424 "trtype": "$TEST_TRANSPORT", 
00:09:37.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.424 "adrfam": "ipv4", 00:09:37.424 "trsvcid": "$NVMF_PORT", 00:09:37.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.424 "hdgst": ${hdgst:-false}, 00:09:37.424 "ddgst": ${ddgst:-false} 00:09:37.424 }, 00:09:37.424 "method": "bdev_nvme_attach_controller" 00:09:37.424 } 00:09:37.424 EOF 00:09:37.424 )") 00:09:37.424 09:00:14 -- nvmf/common.sh@542 -- # cat 00:09:37.424 09:00:14 -- nvmf/common.sh@544 -- # jq . 00:09:37.424 09:00:14 -- nvmf/common.sh@545 -- # IFS=, 00:09:37.424 09:00:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:37.424 "params": { 00:09:37.424 "name": "Nvme1", 00:09:37.424 "trtype": "tcp", 00:09:37.424 "traddr": "10.0.0.2", 00:09:37.424 "adrfam": "ipv4", 00:09:37.424 "trsvcid": "4420", 00:09:37.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.424 "hdgst": false, 00:09:37.424 "ddgst": false 00:09:37.424 }, 00:09:37.424 "method": "bdev_nvme_attach_controller" 00:09:37.424 }' 00:09:37.424 [2024-11-17 09:00:14.188801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:37.424 [2024-11-17 09:00:14.188882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62754 ] 00:09:37.424 [2024-11-17 09:00:14.330021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.684 [2024-11-17 09:00:14.401645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.684 Running I/O for 10 seconds... 00:09:47.702 00:09:47.702 Latency(us) 00:09:47.702 [2024-11-17T09:00:24.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.702 [2024-11-17T09:00:24.632Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:47.702 Verification LBA range: start 0x0 length 0x1000 00:09:47.702 Nvme1n1 : 10.01 10108.85 78.98 0.00 0.00 12630.03 1385.19 20494.89 00:09:47.702 [2024-11-17T09:00:24.632Z] =================================================================================================================== 00:09:47.702 [2024-11-17T09:00:24.632Z] Total : 10108.85 78.98 0.00 0.00 12630.03 1385.19 20494.89 00:09:47.962 09:00:24 -- target/zcopy.sh@39 -- # perfpid=62877 00:09:47.962 09:00:24 -- target/zcopy.sh@41 -- # xtrace_disable 00:09:47.962 09:00:24 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:47.962 09:00:24 -- nvmf/common.sh@520 -- # config=() 00:09:47.962 09:00:24 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:47.962 09:00:24 -- common/autotest_common.sh@10 -- # set +x 00:09:47.962 09:00:24 -- nvmf/common.sh@520 -- # local subsystem config 00:09:47.962 09:00:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:47.962 09:00:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:47.962 { 00:09:47.962 "params": { 00:09:47.962 "name": "Nvme$subsystem", 00:09:47.962 "trtype": "$TEST_TRANSPORT", 00:09:47.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.962 "adrfam": "ipv4", 00:09:47.962 "trsvcid": "$NVMF_PORT", 00:09:47.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.962 "hdgst": ${hdgst:-false}, 00:09:47.962 "ddgst": ${ddgst:-false} 
00:09:47.962 }, 00:09:47.962 "method": "bdev_nvme_attach_controller" 00:09:47.962 } 00:09:47.962 EOF 00:09:47.962 )") 00:09:47.962 09:00:24 -- nvmf/common.sh@542 -- # cat 00:09:47.962 [2024-11-17 09:00:24.740716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.740929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 09:00:24 -- nvmf/common.sh@544 -- # jq . 00:09:47.962 09:00:24 -- nvmf/common.sh@545 -- # IFS=, 00:09:47.962 09:00:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:47.962 "params": { 00:09:47.962 "name": "Nvme1", 00:09:47.962 "trtype": "tcp", 00:09:47.962 "traddr": "10.0.0.2", 00:09:47.962 "adrfam": "ipv4", 00:09:47.962 "trsvcid": "4420", 00:09:47.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.962 "hdgst": false, 00:09:47.962 "ddgst": false 00:09:47.962 }, 00:09:47.962 "method": "bdev_nvme_attach_controller" 00:09:47.962 }' 00:09:47.962 [2024-11-17 09:00:24.752717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.752750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.764713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.764739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.776695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.776720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.788708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.788732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.791541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:47.962 [2024-11-17 09:00:24.792041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62877 ] 00:09:47.962 [2024-11-17 09:00:24.800724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.800754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.812721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.812748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.824740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.824780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.836716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.836739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.848718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.848740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.860724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.860749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.872726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.872751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.962 [2024-11-17 09:00:24.884748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.962 [2024-11-17 09:00:24.884798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:24.896763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.896804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:24.908752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.908784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:24.920750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.920777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:24.930375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.221 [2024-11-17 09:00:24.932763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.932788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:24.944775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.944809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:48.221 [2024-11-17 09:00:24.956774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.956802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:24.968792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.968846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:24.980779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.980806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:24.989036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.221 [2024-11-17 09:00:24.992775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:24.992800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.004797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.004831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.016802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.016841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.028804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.028839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.040813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.040853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.052814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.052856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.064852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.064894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.076863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.076902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.088859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.088889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.100878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.100907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.112960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.112996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 Running I/O for 5 seconds... 
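Reduced to its essentials, the target bring-up and the two workloads driven above amount to the following sequence (commands as issued through rpc_cmd / scripts/rpc.py against this target; the generated bdevperf JSON shown earlier attaches the subsystem as Nvme1 at 10.0.0.2:4420):

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # first pass: 10 s verify workload (completed above at roughly 10.1k IOPS)
    build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
    # second pass: 5 s 50/50 random read/write workload
    build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192

The repeated "Requested NSID 1 already in use" / "Unable to add namespace" messages interleaved with this second run come from namespace-add RPCs issued while the workload is in flight; the test keeps running past them, so they appear to be expected, non-fatal output of this pass rather than failures.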
00:09:48.221 [2024-11-17 09:00:25.124994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.125041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.221 [2024-11-17 09:00:25.141561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.221 [2024-11-17 09:00:25.141626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.157210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.157243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.168323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.168356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.184689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.184719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.202528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.202769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.217175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.217208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.232579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.232637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.250316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.250501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.265562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.265776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.283387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.283419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.298950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.298999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.316671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.316703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.333237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.333270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.349720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 
[2024-11-17 09:00:25.349753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.367318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.367350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.383266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.383301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.481 [2024-11-17 09:00:25.399429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.481 [2024-11-17 09:00:25.399461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.409002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.409034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.425499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.425533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.442714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.442747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.459263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.459298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.474940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.474988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.492743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.492776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.508156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.508340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.525419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.525451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.542214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.542246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.558261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.558293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.575609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.575806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.591979] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.592012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.609219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.609252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.624855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.624888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.636135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.636166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.740 [2024-11-17 09:00:25.652323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.740 [2024-11-17 09:00:25.652355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.670996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.671031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.684494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.684535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.701019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.701063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.718367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.718685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.734243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.734280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.751887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.751918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.767141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.767172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.778192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.778239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.793550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.793583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.810205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.810237] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.827440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.827633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.842958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.843149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.860424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.860456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.877321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.877355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.893111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.893142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.000 [2024-11-17 09:00:25.911492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.000 [2024-11-17 09:00:25.911523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:25.927758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:25.927799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:25.945135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:25.945167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:25.960815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:25.960856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:25.977557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:25.977644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:25.995221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:25.995274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.010403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.010558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.021268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.021310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.037936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.038248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.051812] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.051847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.068074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.068123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.084704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.084736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.101504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.101537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.118946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.119128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.134866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.134898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.152022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.152054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.166813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.166846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.259 [2024-11-17 09:00:26.184026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.259 [2024-11-17 09:00:26.184062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.200396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.200428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.216212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.216244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.234421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.234636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.249706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.249739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.260546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.260578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.276836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.276868] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.293099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.293132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.311781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.311814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.326277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.326309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.337816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.337849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.353465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.353532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.370154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.370186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.387309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.387490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.402912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.402945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.414256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.414450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.519 [2024-11-17 09:00:26.430583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.519 [2024-11-17 09:00:26.430642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.447638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.447698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.463507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.463549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.474690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.474731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.490784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.490851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.505778] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.505830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.521496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.521532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.538138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.538169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.554823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.554854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.572207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.572394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.587963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.588011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.605597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.605658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.622101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.622133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.639531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.639563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.653424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.653455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.670092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.670254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.687905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.687938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.779 [2024-11-17 09:00:26.702435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.779 [2024-11-17 09:00:26.702469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.718107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.718138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.735736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.735768] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.750780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.750812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.767060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.767090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.784183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.784215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.800489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.800521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.817519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.817552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.833593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.833652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.851296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.851329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.867742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.867777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.883631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.883674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.901321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.901354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.915240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.915274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.930801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.930833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.948254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.948285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.039 [2024-11-17 09:00:26.963813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.039 [2024-11-17 09:00:26.963846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:26.982132] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:26.982313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:26.996805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:26.997060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.013529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.013574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.029050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.029098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.046671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.046702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.063017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.063050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.079713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.079745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.095852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.095884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.113859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.113893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.128163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.128194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.142406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.142438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.158807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.158837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.175875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.176094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.191729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.191762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.207145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.207325] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.299 [2024-11-17 09:00:27.224279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.299 [2024-11-17 09:00:27.224314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.238958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.239007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.248423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.248461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.264501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.264556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.282450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.282647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.297990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.298150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.316128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.316160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.329962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.329995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.346152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.346195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.363587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.363681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.378937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.379006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.387826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.387859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.402922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.403088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.418949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.418981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.436952] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.437001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.451058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.451091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.559 [2024-11-17 09:00:27.467559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.559 [2024-11-17 09:00:27.467656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.485511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.485545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.499951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.499998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.510759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.510791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.527059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.527090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.543559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.543640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.560490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.560537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.577252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.577292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.594224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.594415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.610420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.610453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.628091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.628122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.644089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.644137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.660064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.660095] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.678390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.678423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.692407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.692438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.708288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.708321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.724434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.724466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.817 [2024-11-17 09:00:27.743095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.817 [2024-11-17 09:00:27.743141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.757980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.758195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.774881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.774913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.790433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.790465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.807215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.807247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.822967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.823016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.840480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.840696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.858080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.858111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.874292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.874324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.076 [2024-11-17 09:00:27.891425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.076 [2024-11-17 09:00:27.891630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.077 [2024-11-17 09:00:27.907868] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.077 [2024-11-17 09:00:27.907900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.077 [2024-11-17 09:00:27.925704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.077 [2024-11-17 09:00:27.925736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.077 [2024-11-17 09:00:27.940796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.077 [2024-11-17 09:00:27.940829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.077 [2024-11-17 09:00:27.952073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.077 [2024-11-17 09:00:27.952105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.077 [2024-11-17 09:00:27.968153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.077 [2024-11-17 09:00:27.968185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.077 [2024-11-17 09:00:27.984399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.077 [2024-11-17 09:00:27.984431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.077 [2024-11-17 09:00:28.002433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.077 [2024-11-17 09:00:28.002464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.336 [2024-11-17 09:00:28.016587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.336 [2024-11-17 09:00:28.016677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.336 [2024-11-17 09:00:28.031958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.336 [2024-11-17 09:00:28.032006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.336 [2024-11-17 09:00:28.050383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.336 [2024-11-17 09:00:28.050582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.336 [2024-11-17 09:00:28.066252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.336 [2024-11-17 09:00:28.066287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.336 [2024-11-17 09:00:28.084374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.336 [2024-11-17 09:00:28.084571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.336 [2024-11-17 09:00:28.100319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.336 [2024-11-17 09:00:28.100494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.117764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.117971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.133441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.133696] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.151180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.151359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.166980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.167156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.183689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.183863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.200616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.200823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.216146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.216320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.233505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.233734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.337 [2024-11-17 09:00:28.249217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.337 [2024-11-17 09:00:28.249393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.267939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.268113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.282929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.283138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.300754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.300932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.318589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.318750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.333914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.334090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.343588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.343793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.359320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.359480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.376978] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.377144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.391999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.392055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.408798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.408829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.424371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.424404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.442975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.443007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.456746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.456778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.472885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.472919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.489250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.489284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.507433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.596 [2024-11-17 09:00:28.507466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.596 [2024-11-17 09:00:28.522156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.522348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.539143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.539193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.554182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.554214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.569973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.570005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.586409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.586451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.602333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.602365] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.613399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.613640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.630078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.630110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.645396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.645628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.656427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.656648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.673133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.673162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.689429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.689483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.706532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.706563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.722818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.722849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.739431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.739463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.757142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.757174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.856 [2024-11-17 09:00:28.771441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.856 [2024-11-17 09:00:28.771473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.788031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.788212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.803131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.803317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.818983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.819018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.836359] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.836392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.853224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.853256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.871068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.871100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.886282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.886467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.897704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.897916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.914267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.914444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.930473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.930732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.947581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.947771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.964950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.965119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.979235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.979419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:28.996142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:28.996348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:29.012326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:29.012689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.116 [2024-11-17 09:00:29.028837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.116 [2024-11-17 09:00:29.029019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.047060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.047273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.062047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.062211] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.080975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.081186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.095045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.095078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.111079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.111111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.127550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.127582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.144529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.144768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.161904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.161937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.178302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.178333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.195474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.195684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.211556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.211796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.228248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.228280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.245659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.245692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.261825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.261870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.278751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.278782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.376 [2024-11-17 09:00:29.295748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.376 [2024-11-17 09:00:29.295779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.310759] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.310813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.326403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.326731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.345283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.345354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.360732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.360767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.372243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.372276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.388215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.388247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.404897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.404931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.421438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.421513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.438728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.438757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.454459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.454651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.472193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.472225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.487661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.487717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.505386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.635 [2024-11-17 09:00:29.505438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.635 [2024-11-17 09:00:29.522009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.636 [2024-11-17 09:00:29.522191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.636 [2024-11-17 09:00:29.539238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.636 [2024-11-17 09:00:29.539270] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.636 [2024-11-17 09:00:29.557220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.636 [2024-11-17 09:00:29.557254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.571471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.571503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.589522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.589558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.603297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.603330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.618834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.618867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.636804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.636836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.652469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.652678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.669103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.669134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.685805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.685839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.702216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.702259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.720491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.720543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.736087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.736120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.753904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.753936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.769140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.769172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.786726] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.786757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.803606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.803666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.895 [2024-11-17 09:00:29.821384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.895 [2024-11-17 09:00:29.821416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.835411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.835443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.850517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.850747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.867559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.867768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.883880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.884112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.899258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.899434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.910504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.910862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.926348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.926524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.943901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.944062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.959205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.959379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.976777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.977036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:29.992570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:29.992800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:30.011052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:30.011201] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:30.025694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:30.025872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:30.042386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:30.042559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:30.058387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:30.058560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.154 [2024-11-17 09:00:30.075648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.154 [2024-11-17 09:00:30.075840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.091779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.091954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.109321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.109645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.123493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.123676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 00:09:53.414 Latency(us) 00:09:53.414 [2024-11-17T09:00:30.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.414 [2024-11-17T09:00:30.344Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:53.414 Nvme1n1 : 5.01 13102.48 102.36 0.00 0.00 9757.53 3961.95 20256.58 00:09:53.414 [2024-11-17T09:00:30.344Z] =================================================================================================================== 00:09:53.414 [2024-11-17T09:00:30.344Z] Total : 13102.48 102.36 0.00 0.00 9757.53 3961.95 20256.58 00:09:53.414 [2024-11-17 09:00:30.133212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.133386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.145239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.145485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.157250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.157299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.169244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.169285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.181248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.181290] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.193245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.193284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.205249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.205288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.217219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.217241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.229223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.229246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.241275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.241332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.253264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.253310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.265231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.265255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.277257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.277313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.289274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.289314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.301261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.301285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 [2024-11-17 09:00:30.313246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.414 [2024-11-17 09:00:30.313269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.414 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62877) - No such process 00:09:53.414 09:00:30 -- target/zcopy.sh@49 -- # wait 62877 00:09:53.414 09:00:30 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.414 09:00:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.414 09:00:30 -- common/autotest_common.sh@10 -- # set +x 00:09:53.414 09:00:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.414 09:00:30 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:53.414 09:00:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.414 09:00:30 -- common/autotest_common.sh@10 -- # set +x 00:09:53.673 delay0 00:09:53.673 
09:00:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.673 09:00:30 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:53.673 09:00:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.673 09:00:30 -- common/autotest_common.sh@10 -- # set +x 00:09:53.673 09:00:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.673 09:00:30 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:53.673 [2024-11-17 09:00:30.514157] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:00.242 Initializing NVMe Controllers 00:10:00.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:00.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:00.242 Initialization complete. Launching workers. 00:10:00.242 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 89 00:10:00.242 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 376, failed to submit 33 00:10:00.242 success 249, unsuccess 127, failed 0 00:10:00.242 09:00:36 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:00.242 09:00:36 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:00.242 09:00:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:00.242 09:00:36 -- nvmf/common.sh@116 -- # sync 00:10:00.242 09:00:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:00.242 09:00:36 -- nvmf/common.sh@119 -- # set +e 00:10:00.242 09:00:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:00.242 09:00:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:00.242 rmmod nvme_tcp 00:10:00.242 rmmod nvme_fabrics 00:10:00.242 rmmod nvme_keyring 00:10:00.243 09:00:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:00.243 09:00:36 -- nvmf/common.sh@123 -- # set -e 00:10:00.243 09:00:36 -- nvmf/common.sh@124 -- # return 0 00:10:00.243 09:00:36 -- nvmf/common.sh@477 -- # '[' -n 62721 ']' 00:10:00.243 09:00:36 -- nvmf/common.sh@478 -- # killprocess 62721 00:10:00.243 09:00:36 -- common/autotest_common.sh@936 -- # '[' -z 62721 ']' 00:10:00.243 09:00:36 -- common/autotest_common.sh@940 -- # kill -0 62721 00:10:00.243 09:00:36 -- common/autotest_common.sh@941 -- # uname 00:10:00.243 09:00:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:00.243 09:00:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62721 00:10:00.243 killing process with pid 62721 00:10:00.243 09:00:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:00.243 09:00:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:00.243 09:00:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62721' 00:10:00.243 09:00:36 -- common/autotest_common.sh@955 -- # kill 62721 00:10:00.243 09:00:36 -- common/autotest_common.sh@960 -- # wait 62721 00:10:00.243 09:00:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:00.243 09:00:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:00.243 09:00:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:00.243 09:00:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:00.243 09:00:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:00.243 09:00:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:00.243 09:00:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.243 09:00:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.243 09:00:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:00.243 00:10:00.243 real 0m24.418s 00:10:00.243 user 0m40.230s 00:10:00.243 sys 0m6.339s 00:10:00.243 ************************************ 00:10:00.243 END TEST nvmf_zcopy 00:10:00.243 ************************************ 00:10:00.243 09:00:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:00.243 09:00:36 -- common/autotest_common.sh@10 -- # set +x 00:10:00.243 09:00:36 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.243 09:00:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:00.243 09:00:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:00.243 09:00:36 -- common/autotest_common.sh@10 -- # set +x 00:10:00.243 ************************************ 00:10:00.243 START TEST nvmf_nmic 00:10:00.243 ************************************ 00:10:00.243 09:00:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.243 * Looking for test storage... 00:10:00.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:00.243 09:00:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:00.243 09:00:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:00.243 09:00:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:00.243 09:00:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:00.243 09:00:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:00.243 09:00:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:00.243 09:00:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:00.243 09:00:37 -- scripts/common.sh@335 -- # IFS=.-: 00:10:00.243 09:00:37 -- scripts/common.sh@335 -- # read -ra ver1 00:10:00.243 09:00:37 -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.243 09:00:37 -- scripts/common.sh@336 -- # read -ra ver2 00:10:00.503 09:00:37 -- scripts/common.sh@337 -- # local 'op=<' 00:10:00.503 09:00:37 -- scripts/common.sh@339 -- # ver1_l=2 00:10:00.503 09:00:37 -- scripts/common.sh@340 -- # ver2_l=1 00:10:00.503 09:00:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:00.503 09:00:37 -- scripts/common.sh@343 -- # case "$op" in 00:10:00.503 09:00:37 -- scripts/common.sh@344 -- # : 1 00:10:00.503 09:00:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:00.503 09:00:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.503 09:00:37 -- scripts/common.sh@364 -- # decimal 1 00:10:00.503 09:00:37 -- scripts/common.sh@352 -- # local d=1 00:10:00.503 09:00:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.503 09:00:37 -- scripts/common.sh@354 -- # echo 1 00:10:00.503 09:00:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:00.503 09:00:37 -- scripts/common.sh@365 -- # decimal 2 00:10:00.503 09:00:37 -- scripts/common.sh@352 -- # local d=2 00:10:00.503 09:00:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.503 09:00:37 -- scripts/common.sh@354 -- # echo 2 00:10:00.503 09:00:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:00.503 09:00:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:00.503 09:00:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:00.503 09:00:37 -- scripts/common.sh@367 -- # return 0 00:10:00.503 09:00:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.503 09:00:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:00.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.503 --rc genhtml_branch_coverage=1 00:10:00.503 --rc genhtml_function_coverage=1 00:10:00.503 --rc genhtml_legend=1 00:10:00.503 --rc geninfo_all_blocks=1 00:10:00.503 --rc geninfo_unexecuted_blocks=1 00:10:00.503 00:10:00.503 ' 00:10:00.503 09:00:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:00.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.504 --rc genhtml_branch_coverage=1 00:10:00.504 --rc genhtml_function_coverage=1 00:10:00.504 --rc genhtml_legend=1 00:10:00.504 --rc geninfo_all_blocks=1 00:10:00.504 --rc geninfo_unexecuted_blocks=1 00:10:00.504 00:10:00.504 ' 00:10:00.504 09:00:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:00.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.504 --rc genhtml_branch_coverage=1 00:10:00.504 --rc genhtml_function_coverage=1 00:10:00.504 --rc genhtml_legend=1 00:10:00.504 --rc geninfo_all_blocks=1 00:10:00.504 --rc geninfo_unexecuted_blocks=1 00:10:00.504 00:10:00.504 ' 00:10:00.504 09:00:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:00.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.504 --rc genhtml_branch_coverage=1 00:10:00.504 --rc genhtml_function_coverage=1 00:10:00.504 --rc genhtml_legend=1 00:10:00.504 --rc geninfo_all_blocks=1 00:10:00.504 --rc geninfo_unexecuted_blocks=1 00:10:00.504 00:10:00.504 ' 00:10:00.504 09:00:37 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:00.504 09:00:37 -- nvmf/common.sh@7 -- # uname -s 00:10:00.504 09:00:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.504 09:00:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.504 09:00:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.504 09:00:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.504 09:00:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.504 09:00:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.504 09:00:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.504 09:00:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.504 09:00:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.504 09:00:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.504 09:00:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:00.504 
09:00:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:00.504 09:00:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.504 09:00:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.504 09:00:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:00.504 09:00:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:00.504 09:00:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.504 09:00:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.504 09:00:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.504 09:00:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.504 09:00:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.504 09:00:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.504 09:00:37 -- paths/export.sh@5 -- # export PATH 00:10:00.504 09:00:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.504 09:00:37 -- nvmf/common.sh@46 -- # : 0 00:10:00.504 09:00:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:00.504 09:00:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:00.504 09:00:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:00.504 09:00:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.504 09:00:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.504 09:00:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
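The host identity exported above (NVME_HOSTNQN and its matching NVME_HOSTID) is what this test reuses for every connect further down, including the second path added on port 4421. A minimal sketch of that flow, assuming nvme-cli is available and the listeners created later in this log already exist:

    # Minimal sketch (not part of the harness): one generated host NQN/ID
    # reused for both paths of the multipath connect exercised below.
    HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}       # the UUID portion, matching NVME_HOSTID above
    for port in 4420 4421; do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s "$port"
    done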
00:10:00.504 09:00:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:00.504 09:00:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:00.504 09:00:37 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.504 09:00:37 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.504 09:00:37 -- target/nmic.sh@14 -- # nvmftestinit 00:10:00.504 09:00:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:00.504 09:00:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.504 09:00:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:00.504 09:00:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:00.504 09:00:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:00.504 09:00:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.504 09:00:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.504 09:00:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.504 09:00:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:00.504 09:00:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:00.504 09:00:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:00.504 09:00:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:00.504 09:00:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:00.504 09:00:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:00.504 09:00:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.504 09:00:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.504 09:00:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:00.504 09:00:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:00.504 09:00:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:00.504 09:00:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:00.504 09:00:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:00.504 09:00:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.504 09:00:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:00.504 09:00:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:00.504 09:00:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:00.504 09:00:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:00.504 09:00:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:00.504 09:00:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:00.504 Cannot find device "nvmf_tgt_br" 00:10:00.504 09:00:37 -- nvmf/common.sh@154 -- # true 00:10:00.504 09:00:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.504 Cannot find device "nvmf_tgt_br2" 00:10:00.504 09:00:37 -- nvmf/common.sh@155 -- # true 00:10:00.504 09:00:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:00.504 09:00:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:00.504 Cannot find device "nvmf_tgt_br" 00:10:00.504 09:00:37 -- nvmf/common.sh@157 -- # true 00:10:00.504 09:00:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:00.504 Cannot find device "nvmf_tgt_br2" 00:10:00.504 09:00:37 -- nvmf/common.sh@158 -- # true 00:10:00.504 09:00:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:00.504 09:00:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:00.504 09:00:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.504 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:00.504 09:00:37 -- nvmf/common.sh@161 -- # true 00:10:00.504 09:00:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.504 09:00:37 -- nvmf/common.sh@162 -- # true 00:10:00.504 09:00:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:00.505 09:00:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:00.505 09:00:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:00.505 09:00:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:00.505 09:00:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:00.505 09:00:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:00.505 09:00:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:00.505 09:00:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:00.505 09:00:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:00.764 09:00:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:00.764 09:00:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:00.764 09:00:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:00.764 09:00:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:00.764 09:00:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:00.764 09:00:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.764 09:00:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.764 09:00:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:00.764 09:00:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:00.764 09:00:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.764 09:00:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.764 09:00:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.764 09:00:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.764 09:00:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.764 09:00:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:00.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:10:00.764 00:10:00.764 --- 10.0.0.2 ping statistics --- 00:10:00.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.764 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:00.764 09:00:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:00.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:10:00.764 00:10:00.764 --- 10.0.0.3 ping statistics --- 00:10:00.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.764 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:00.764 09:00:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:00.764 00:10:00.764 --- 10.0.0.1 ping statistics --- 00:10:00.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.764 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:00.764 09:00:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.764 09:00:37 -- nvmf/common.sh@421 -- # return 0 00:10:00.764 09:00:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:00.764 09:00:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.764 09:00:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:00.764 09:00:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:00.764 09:00:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.764 09:00:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:00.764 09:00:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:00.764 09:00:37 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:00.764 09:00:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:00.764 09:00:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.764 09:00:37 -- common/autotest_common.sh@10 -- # set +x 00:10:00.764 09:00:37 -- nvmf/common.sh@469 -- # nvmfpid=63204 00:10:00.764 09:00:37 -- nvmf/common.sh@470 -- # waitforlisten 63204 00:10:00.764 09:00:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.764 09:00:37 -- common/autotest_common.sh@829 -- # '[' -z 63204 ']' 00:10:00.764 09:00:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.764 09:00:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.764 09:00:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.764 09:00:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.764 09:00:37 -- common/autotest_common.sh@10 -- # set +x 00:10:00.764 [2024-11-17 09:00:37.615241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:00.764 [2024-11-17 09:00:37.615336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.023 [2024-11-17 09:00:37.750019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.023 [2024-11-17 09:00:37.803352] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:01.023 [2024-11-17 09:00:37.803788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.023 [2024-11-17 09:00:37.803911] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.023 [2024-11-17 09:00:37.804097] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
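The veth/namespace wiring that nvmf_veth_init performed above (and that the ping checks just validated) condenses to the sketch below; teardown, the second target interface (10.0.0.3), and error handling are omitted, and the commands mirror the ones visible in this log:

    # Condensed sketch of the topology built above: the SPDK target runs inside
    # the nvmf_tgt_ns_spdk namespace on 10.0.0.2, the initiator stays in the
    # root namespace on 10.0.0.1, and both sides meet on the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # same reachability check as above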
00:10:01.023 [2024-11-17 09:00:37.804324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.023 [2024-11-17 09:00:37.804480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.023 [2024-11-17 09:00:37.804558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.023 [2024-11-17 09:00:37.804559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.000 09:00:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.000 09:00:38 -- common/autotest_common.sh@862 -- # return 0 00:10:02.000 09:00:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:02.000 09:00:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:02.000 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.000 09:00:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.000 09:00:38 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.000 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.000 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 [2024-11-17 09:00:38.670430] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:02.001 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 Malloc0 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:02.001 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.001 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.001 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 [2024-11-17 09:00:38.731329] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.001 test case1: single bdev can't be used in multiple subsystems 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:02.001 09:00:38 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:02.001 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:02.001 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:02.001 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@28 -- # nmic_status=0 00:10:02.001 09:00:38 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:02.001 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 [2024-11-17 09:00:38.755157] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:02.001 [2024-11-17 09:00:38.755191] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:02.001 [2024-11-17 09:00:38.755218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.001 request: 00:10:02.001 { 00:10:02.001 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:02.001 "namespace": { 00:10:02.001 "bdev_name": "Malloc0" 00:10:02.001 }, 00:10:02.001 "method": "nvmf_subsystem_add_ns", 00:10:02.001 "req_id": 1 00:10:02.001 } 00:10:02.001 Got JSON-RPC error response 00:10:02.001 response: 00:10:02.001 { 00:10:02.001 "code": -32602, 00:10:02.001 "message": "Invalid parameters" 00:10:02.001 } 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@29 -- # nmic_status=1 00:10:02.001 09:00:38 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:02.001 09:00:38 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:02.001 Adding namespace failed - expected result. 00:10:02.001 09:00:38 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:02.001 test case2: host connect to nvmf target in multiple paths 00:10:02.001 09:00:38 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:02.001 09:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 [2024-11-17 09:00:38.771300] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:02.001 09:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:00:38 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.001 09:00:38 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:02.260 09:00:39 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:02.260 09:00:39 -- common/autotest_common.sh@1187 -- # local i=0 00:10:02.260 09:00:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.260 09:00:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:02.260 09:00:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:04.162 09:00:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:04.162 09:00:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:04.162 09:00:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.162 09:00:41 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:10:04.162 09:00:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.162 09:00:41 -- common/autotest_common.sh@1197 -- # return 0 00:10:04.162 09:00:41 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:04.162 [global] 00:10:04.162 thread=1 00:10:04.162 invalidate=1 00:10:04.162 rw=write 00:10:04.162 time_based=1 00:10:04.162 runtime=1 00:10:04.162 ioengine=libaio 00:10:04.162 direct=1 00:10:04.162 bs=4096 00:10:04.162 iodepth=1 00:10:04.162 norandommap=0 00:10:04.162 numjobs=1 00:10:04.162 00:10:04.162 verify_dump=1 00:10:04.162 verify_backlog=512 00:10:04.162 verify_state_save=0 00:10:04.162 do_verify=1 00:10:04.162 verify=crc32c-intel 00:10:04.162 [job0] 00:10:04.162 filename=/dev/nvme0n1 00:10:04.421 Could not set queue depth (nvme0n1) 00:10:04.421 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.421 fio-3.35 00:10:04.421 Starting 1 thread 00:10:05.799 00:10:05.799 job0: (groupid=0, jobs=1): err= 0: pid=63296: Sun Nov 17 09:00:42 2024 00:10:05.799 read: IOPS=3058, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1001msec) 00:10:05.799 slat (nsec): min=11129, max=62229, avg=13637.32, stdev=4376.70 00:10:05.799 clat (usec): min=134, max=438, avg=180.83, stdev=23.51 00:10:05.799 lat (usec): min=145, max=477, avg=194.47, stdev=24.33 00:10:05.799 clat percentiles (usec): 00:10:05.799 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:10:05.799 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:10:05.799 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 223], 00:10:05.800 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 273], 99.95th=[ 314], 00:10:05.800 | 99.99th=[ 441] 00:10:05.800 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:05.800 slat (usec): min=16, max=104, avg=21.42, stdev= 6.68 00:10:05.800 clat (usec): min=79, max=232, avg=107.07, stdev=17.51 00:10:05.800 lat (usec): min=96, max=336, avg=128.48, stdev=19.97 00:10:05.800 clat percentiles (usec): 00:10:05.800 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 94], 00:10:05.800 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 106], 00:10:05.800 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 133], 95.00th=[ 143], 00:10:05.800 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 198], 00:10:05.800 | 99.99th=[ 233] 00:10:05.800 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:10:05.800 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:05.800 lat (usec) : 100=20.43%, 250=79.38%, 500=0.20% 00:10:05.800 cpu : usr=2.20%, sys=8.50%, ctx=6134, majf=0, minf=5 00:10:05.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.800 issued rwts: total=3062,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.800 00:10:05.800 Run status group 0 (all jobs): 00:10:05.800 READ: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=12.0MiB (12.5MB), run=1001-1001msec 00:10:05.800 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:05.800 00:10:05.800 Disk stats (read/write): 00:10:05.800 
nvme0n1: ios=2610/3060, merge=0/0, ticks=498/351, in_queue=849, util=91.48% 00:10:05.800 09:00:42 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:05.800 09:00:42 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.800 09:00:42 -- common/autotest_common.sh@1208 -- # local i=0 00:10:05.800 09:00:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:05.800 09:00:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.800 09:00:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:05.800 09:00:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.800 09:00:42 -- common/autotest_common.sh@1220 -- # return 0 00:10:05.800 09:00:42 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:05.800 09:00:42 -- target/nmic.sh@53 -- # nvmftestfini 00:10:05.800 09:00:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:05.800 09:00:42 -- nvmf/common.sh@116 -- # sync 00:10:05.800 09:00:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:05.800 09:00:42 -- nvmf/common.sh@119 -- # set +e 00:10:05.800 09:00:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:05.800 09:00:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:05.800 rmmod nvme_tcp 00:10:05.800 rmmod nvme_fabrics 00:10:05.800 rmmod nvme_keyring 00:10:05.800 09:00:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:05.800 09:00:42 -- nvmf/common.sh@123 -- # set -e 00:10:05.800 09:00:42 -- nvmf/common.sh@124 -- # return 0 00:10:05.800 09:00:42 -- nvmf/common.sh@477 -- # '[' -n 63204 ']' 00:10:05.800 09:00:42 -- nvmf/common.sh@478 -- # killprocess 63204 00:10:05.800 09:00:42 -- common/autotest_common.sh@936 -- # '[' -z 63204 ']' 00:10:05.800 09:00:42 -- common/autotest_common.sh@940 -- # kill -0 63204 00:10:05.800 09:00:42 -- common/autotest_common.sh@941 -- # uname 00:10:05.800 09:00:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:05.800 09:00:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63204 00:10:05.800 killing process with pid 63204 00:10:05.800 09:00:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:05.800 09:00:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:05.800 09:00:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63204' 00:10:05.800 09:00:42 -- common/autotest_common.sh@955 -- # kill 63204 00:10:05.800 09:00:42 -- common/autotest_common.sh@960 -- # wait 63204 00:10:06.060 09:00:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:06.060 09:00:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:06.060 09:00:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:06.060 09:00:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.060 09:00:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:06.060 09:00:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.060 09:00:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.060 09:00:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.060 09:00:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:06.060 00:10:06.060 real 0m5.808s 00:10:06.060 user 0m18.593s 00:10:06.060 sys 0m2.274s 00:10:06.060 ************************************ 00:10:06.060 END TEST nvmf_nmic 00:10:06.060 ************************************ 00:10:06.060 09:00:42 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:10:06.060 09:00:42 -- common/autotest_common.sh@10 -- # set +x 00:10:06.060 09:00:42 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.060 09:00:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:06.060 09:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:06.060 09:00:42 -- common/autotest_common.sh@10 -- # set +x 00:10:06.060 ************************************ 00:10:06.060 START TEST nvmf_fio_target 00:10:06.060 ************************************ 00:10:06.060 09:00:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.060 * Looking for test storage... 00:10:06.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:06.060 09:00:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:06.060 09:00:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:06.060 09:00:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:06.321 09:00:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:06.321 09:00:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:06.321 09:00:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:06.321 09:00:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:06.321 09:00:43 -- scripts/common.sh@335 -- # IFS=.-: 00:10:06.321 09:00:43 -- scripts/common.sh@335 -- # read -ra ver1 00:10:06.321 09:00:43 -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.321 09:00:43 -- scripts/common.sh@336 -- # read -ra ver2 00:10:06.321 09:00:43 -- scripts/common.sh@337 -- # local 'op=<' 00:10:06.321 09:00:43 -- scripts/common.sh@339 -- # ver1_l=2 00:10:06.321 09:00:43 -- scripts/common.sh@340 -- # ver2_l=1 00:10:06.321 09:00:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:06.321 09:00:43 -- scripts/common.sh@343 -- # case "$op" in 00:10:06.321 09:00:43 -- scripts/common.sh@344 -- # : 1 00:10:06.321 09:00:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:06.321 09:00:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.321 09:00:43 -- scripts/common.sh@364 -- # decimal 1 00:10:06.321 09:00:43 -- scripts/common.sh@352 -- # local d=1 00:10:06.321 09:00:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.321 09:00:43 -- scripts/common.sh@354 -- # echo 1 00:10:06.321 09:00:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:06.321 09:00:43 -- scripts/common.sh@365 -- # decimal 2 00:10:06.321 09:00:43 -- scripts/common.sh@352 -- # local d=2 00:10:06.321 09:00:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.321 09:00:43 -- scripts/common.sh@354 -- # echo 2 00:10:06.321 09:00:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:06.321 09:00:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:06.321 09:00:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:06.321 09:00:43 -- scripts/common.sh@367 -- # return 0 00:10:06.321 09:00:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.321 09:00:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.321 --rc genhtml_branch_coverage=1 00:10:06.321 --rc genhtml_function_coverage=1 00:10:06.321 --rc genhtml_legend=1 00:10:06.321 --rc geninfo_all_blocks=1 00:10:06.321 --rc geninfo_unexecuted_blocks=1 00:10:06.321 00:10:06.321 ' 00:10:06.321 09:00:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.321 --rc genhtml_branch_coverage=1 00:10:06.321 --rc genhtml_function_coverage=1 00:10:06.321 --rc genhtml_legend=1 00:10:06.321 --rc geninfo_all_blocks=1 00:10:06.321 --rc geninfo_unexecuted_blocks=1 00:10:06.321 00:10:06.321 ' 00:10:06.321 09:00:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.321 --rc genhtml_branch_coverage=1 00:10:06.321 --rc genhtml_function_coverage=1 00:10:06.321 --rc genhtml_legend=1 00:10:06.321 --rc geninfo_all_blocks=1 00:10:06.321 --rc geninfo_unexecuted_blocks=1 00:10:06.321 00:10:06.321 ' 00:10:06.321 09:00:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.321 --rc genhtml_branch_coverage=1 00:10:06.321 --rc genhtml_function_coverage=1 00:10:06.322 --rc genhtml_legend=1 00:10:06.322 --rc geninfo_all_blocks=1 00:10:06.322 --rc geninfo_unexecuted_blocks=1 00:10:06.322 00:10:06.322 ' 00:10:06.322 09:00:43 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.322 09:00:43 -- nvmf/common.sh@7 -- # uname -s 00:10:06.322 09:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.322 09:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.322 09:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.322 09:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.322 09:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.322 09:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.322 09:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.322 09:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.322 09:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.322 09:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.322 09:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:06.322 
09:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:06.322 09:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.322 09:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.322 09:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.322 09:00:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.322 09:00:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.322 09:00:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.322 09:00:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.322 09:00:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.322 09:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.322 09:00:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.322 09:00:43 -- paths/export.sh@5 -- # export PATH 00:10:06.322 09:00:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.322 09:00:43 -- nvmf/common.sh@46 -- # : 0 00:10:06.322 09:00:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:06.322 09:00:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:06.322 09:00:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:06.322 09:00:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.322 09:00:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.322 09:00:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
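Both the nmic run above and the fio runs later in this test drive I/O through the fio-wrapper script, which prints its generated job file before starting. A rough stand-alone equivalent of one such job, with the flags reconstructed (as an assumption) from the [global]/[job0] parameters shown in those job files:

    # Rough command-line equivalent of one fio-wrapper job from this log
    # (assumption: reconstructed from the printed job file; fio-wrapper itself
    # writes an actual job file rather than passing flags like this).
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --rw=write --bs=4096 --iodepth=1 \
        --numjobs=1 --thread --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0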
00:10:06.322 09:00:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:06.322 09:00:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:06.322 09:00:43 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.322 09:00:43 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.322 09:00:43 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.322 09:00:43 -- target/fio.sh@16 -- # nvmftestinit 00:10:06.322 09:00:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:06.322 09:00:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.322 09:00:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:06.322 09:00:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:06.322 09:00:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:06.322 09:00:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.322 09:00:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.322 09:00:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.322 09:00:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:06.322 09:00:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:06.322 09:00:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:06.322 09:00:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:06.322 09:00:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:06.322 09:00:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:06.322 09:00:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.322 09:00:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.322 09:00:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:06.322 09:00:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:06.322 09:00:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.322 09:00:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.322 09:00:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.323 09:00:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.323 09:00:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.323 09:00:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.323 09:00:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:06.323 09:00:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.323 09:00:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:06.323 09:00:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:06.323 Cannot find device "nvmf_tgt_br" 00:10:06.323 09:00:43 -- nvmf/common.sh@154 -- # true 00:10:06.323 09:00:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.323 Cannot find device "nvmf_tgt_br2" 00:10:06.323 09:00:43 -- nvmf/common.sh@155 -- # true 00:10:06.323 09:00:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:06.323 09:00:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:06.323 Cannot find device "nvmf_tgt_br" 00:10:06.323 09:00:43 -- nvmf/common.sh@157 -- # true 00:10:06.323 09:00:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:06.323 Cannot find device "nvmf_tgt_br2" 00:10:06.323 09:00:43 -- nvmf/common.sh@158 -- # true 00:10:06.323 09:00:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:06.323 09:00:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:06.323 09:00:43 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.323 09:00:43 -- nvmf/common.sh@161 -- # true 00:10:06.323 09:00:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.582 09:00:43 -- nvmf/common.sh@162 -- # true 00:10:06.582 09:00:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.582 09:00:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.582 09:00:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.583 09:00:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.583 09:00:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.583 09:00:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.583 09:00:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.583 09:00:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:06.583 09:00:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:06.583 09:00:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:06.583 09:00:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:06.583 09:00:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:06.583 09:00:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:06.583 09:00:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.583 09:00:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.583 09:00:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.583 09:00:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:06.583 09:00:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:06.583 09:00:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.583 09:00:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.583 09:00:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.583 09:00:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.583 09:00:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.583 09:00:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:06.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:10:06.583 00:10:06.583 --- 10.0.0.2 ping statistics --- 00:10:06.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.583 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:06.583 09:00:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:06.583 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:06.583 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:10:06.583 00:10:06.583 --- 10.0.0.3 ping statistics --- 00:10:06.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.583 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:06.583 09:00:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:06.583 00:10:06.583 --- 10.0.0.1 ping statistics --- 00:10:06.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.583 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:06.583 09:00:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.583 09:00:43 -- nvmf/common.sh@421 -- # return 0 00:10:06.583 09:00:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:06.583 09:00:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.583 09:00:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:06.583 09:00:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:06.583 09:00:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.583 09:00:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:06.583 09:00:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:06.583 09:00:43 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:06.583 09:00:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:06.583 09:00:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.583 09:00:43 -- common/autotest_common.sh@10 -- # set +x 00:10:06.583 09:00:43 -- nvmf/common.sh@469 -- # nvmfpid=63480 00:10:06.583 09:00:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.583 09:00:43 -- nvmf/common.sh@470 -- # waitforlisten 63480 00:10:06.583 09:00:43 -- common/autotest_common.sh@829 -- # '[' -z 63480 ']' 00:10:06.583 09:00:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.583 09:00:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.583 09:00:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.583 09:00:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.583 09:00:43 -- common/autotest_common.sh@10 -- # set +x 00:10:06.583 [2024-11-17 09:00:43.495935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.583 [2024-11-17 09:00:43.496023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.843 [2024-11-17 09:00:43.636678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.843 [2024-11-17 09:00:43.688999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:06.843 [2024-11-17 09:00:43.689408] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.843 [2024-11-17 09:00:43.689564] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
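With networking in place and the target coming up, fio.sh builds its namespaces entirely through rpc.py; the individual calls appear below, and a condensed sketch of that sequence (an abbreviation: the real script captures the bdev names each call prints) looks like this:

    # Condensed sketch of the rpc.py plumbing performed below; concat0 is built
    # the same way from Malloc4-6 and is omitted here for brevity.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    m0=$($rpc bdev_malloc_create 64 512)        # prints the new name, e.g. Malloc0
    m1=$($rpc bdev_malloc_create 64 512)        # Malloc1
    m2=$($rpc bdev_malloc_create 64 512)        # Malloc2
    m3=$($rpc bdev_malloc_create 64 512)        # Malloc3
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m2 $m3"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m0"
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m1"
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420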
00:10:06.843 [2024-11-17 09:00:43.689731] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.843 [2024-11-17 09:00:43.690035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.843 [2024-11-17 09:00:43.690166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.843 [2024-11-17 09:00:43.690248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.843 [2024-11-17 09:00:43.690248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.779 09:00:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.780 09:00:44 -- common/autotest_common.sh@862 -- # return 0 00:10:07.780 09:00:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:07.780 09:00:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.780 09:00:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.780 09:00:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.780 09:00:44 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:08.039 [2024-11-17 09:00:44.747506] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.039 09:00:44 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.326 09:00:45 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:08.326 09:00:45 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.585 09:00:45 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:08.585 09:00:45 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.845 09:00:45 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:08.845 09:00:45 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.104 09:00:45 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:09.104 09:00:45 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:09.363 09:00:46 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.623 09:00:46 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:09.623 09:00:46 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.882 09:00:46 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:09.882 09:00:46 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.141 09:00:46 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:10.141 09:00:46 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:10.400 09:00:47 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.659 09:00:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:10.659 09:00:47 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.918 09:00:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:10.918 09:00:47 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.254 09:00:47 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.254 [2024-11-17 09:00:48.097767] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.254 09:00:48 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:11.531 09:00:48 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:11.789 09:00:48 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.048 09:00:48 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:12.048 09:00:48 -- common/autotest_common.sh@1187 -- # local i=0 00:10:12.048 09:00:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.048 09:00:48 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:10:12.048 09:00:48 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:10:12.048 09:00:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:13.952 09:00:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:13.952 09:00:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:13.952 09:00:50 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.952 09:00:50 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:10:13.952 09:00:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.952 09:00:50 -- common/autotest_common.sh@1197 -- # return 0 00:10:13.952 09:00:50 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:13.952 [global] 00:10:13.952 thread=1 00:10:13.952 invalidate=1 00:10:13.952 rw=write 00:10:13.952 time_based=1 00:10:13.952 runtime=1 00:10:13.952 ioengine=libaio 00:10:13.952 direct=1 00:10:13.952 bs=4096 00:10:13.952 iodepth=1 00:10:13.952 norandommap=0 00:10:13.952 numjobs=1 00:10:13.952 00:10:13.952 verify_dump=1 00:10:13.952 verify_backlog=512 00:10:13.952 verify_state_save=0 00:10:13.952 do_verify=1 00:10:13.952 verify=crc32c-intel 00:10:13.952 [job0] 00:10:13.952 filename=/dev/nvme0n1 00:10:13.952 [job1] 00:10:13.952 filename=/dev/nvme0n2 00:10:13.952 [job2] 00:10:13.952 filename=/dev/nvme0n3 00:10:13.952 [job3] 00:10:13.952 filename=/dev/nvme0n4 00:10:13.952 Could not set queue depth (nvme0n1) 00:10:13.952 Could not set queue depth (nvme0n2) 00:10:13.952 Could not set queue depth (nvme0n3) 00:10:13.953 Could not set queue depth (nvme0n4) 00:10:14.212 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.212 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.212 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.212 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.212 fio-3.35 00:10:14.212 Starting 4 threads 00:10:15.593 00:10:15.593 job0: (groupid=0, jobs=1): err= 0: pid=63664: Sun Nov 17 09:00:52 2024 00:10:15.593 read: IOPS=1936, BW=7744KiB/s (7930kB/s)(7752KiB/1001msec) 
00:10:15.593 slat (nsec): min=11740, max=39970, avg=14162.33, stdev=2953.28 00:10:15.593 clat (usec): min=153, max=419, avg=259.98, stdev=18.89 00:10:15.593 lat (usec): min=168, max=435, avg=274.14, stdev=18.89 00:10:15.593 clat percentiles (usec): 00:10:15.593 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 247], 00:10:15.593 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:10:15.593 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:10:15.593 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 408], 99.95th=[ 420], 00:10:15.593 | 99.99th=[ 420] 00:10:15.593 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:15.593 slat (usec): min=17, max=111, avg=22.22, stdev= 5.28 00:10:15.593 clat (usec): min=97, max=1217, avg=203.46, stdev=33.33 00:10:15.593 lat (usec): min=119, max=1261, avg=225.68, stdev=34.72 00:10:15.593 clat percentiles (usec): 00:10:15.593 | 1.00th=[ 130], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:10:15.593 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:10:15.593 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 239], 00:10:15.593 | 99.00th=[ 318], 99.50th=[ 343], 99.90th=[ 379], 99.95th=[ 379], 00:10:15.593 | 99.99th=[ 1221] 00:10:15.593 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:15.593 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:15.593 lat (usec) : 100=0.03%, 250=63.50%, 500=36.45% 00:10:15.593 lat (msec) : 2=0.03% 00:10:15.593 cpu : usr=1.40%, sys=5.90%, ctx=3986, majf=0, minf=9 00:10:15.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.593 issued rwts: total=1938,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.593 job1: (groupid=0, jobs=1): err= 0: pid=63665: Sun Nov 17 09:00:52 2024 00:10:15.593 read: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:10:15.593 slat (nsec): min=11100, max=42882, avg=13381.89, stdev=2717.47 00:10:15.593 clat (usec): min=131, max=234, avg=170.65, stdev=13.89 00:10:15.593 lat (usec): min=143, max=246, avg=184.03, stdev=14.02 00:10:15.593 clat percentiles (usec): 00:10:15.593 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:15.593 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:10:15.593 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:10:15.593 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 223], 99.95th=[ 223], 00:10:15.593 | 99.99th=[ 235] 00:10:15.593 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:15.593 slat (nsec): min=16877, max=80901, avg=20708.87, stdev=4359.01 00:10:15.593 clat (usec): min=94, max=3503, avg=128.63, stdev=68.82 00:10:15.593 lat (usec): min=116, max=3584, avg=149.34, stdev=70.09 00:10:15.593 clat percentiles (usec): 00:10:15.593 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 117], 00:10:15.593 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:10:15.593 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:10:15.593 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 627], 99.95th=[ 1516], 00:10:15.593 | 99.99th=[ 3490] 00:10:15.593 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:15.593 iops 
: min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:15.593 lat (usec) : 100=0.10%, 250=99.78%, 500=0.05%, 750=0.03% 00:10:15.593 lat (msec) : 2=0.02%, 4=0.02% 00:10:15.593 cpu : usr=2.10%, sys=8.20%, ctx=5961, majf=0, minf=15 00:10:15.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.593 issued rwts: total=2889,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.593 job2: (groupid=0, jobs=1): err= 0: pid=63666: Sun Nov 17 09:00:52 2024 00:10:15.593 read: IOPS=2569, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:15.593 slat (usec): min=11, max=154, avg=15.63, stdev= 6.07 00:10:15.593 clat (usec): min=137, max=632, avg=176.57, stdev=18.46 00:10:15.593 lat (usec): min=149, max=653, avg=192.20, stdev=20.01 00:10:15.593 clat percentiles (usec): 00:10:15.593 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:15.593 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:15.593 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:10:15.593 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 310], 99.95th=[ 388], 00:10:15.593 | 99.99th=[ 635] 00:10:15.593 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:15.593 slat (usec): min=14, max=100, avg=22.97, stdev= 6.63 00:10:15.593 clat (usec): min=100, max=3088, avg=138.50, stdev=57.64 00:10:15.593 lat (usec): min=120, max=3123, avg=161.47, stdev=58.34 00:10:15.593 clat percentiles (usec): 00:10:15.593 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 126], 00:10:15.593 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:10:15.593 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 163], 00:10:15.593 | 99.00th=[ 184], 99.50th=[ 258], 99.90th=[ 486], 99.95th=[ 652], 00:10:15.593 | 99.99th=[ 3097] 00:10:15.593 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:15.593 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:15.593 lat (usec) : 250=99.57%, 500=0.35%, 750=0.05% 00:10:15.593 lat (msec) : 4=0.02% 00:10:15.593 cpu : usr=2.50%, sys=8.40%, ctx=5644, majf=0, minf=10 00:10:15.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.593 issued rwts: total=2572,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.593 job3: (groupid=0, jobs=1): err= 0: pid=63667: Sun Nov 17 09:00:52 2024 00:10:15.593 read: IOPS=1937, BW=7748KiB/s (7934kB/s)(7756KiB/1001msec) 00:10:15.593 slat (nsec): min=11959, max=54705, avg=14996.50, stdev=3533.84 00:10:15.593 clat (usec): min=212, max=1112, avg=261.30, stdev=30.62 00:10:15.593 lat (usec): min=233, max=1127, avg=276.30, stdev=30.76 00:10:15.593 clat percentiles (usec): 00:10:15.593 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:10:15.593 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:10:15.594 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:10:15.594 | 99.00th=[ 338], 99.50th=[ 404], 99.90th=[ 515], 99.95th=[ 1106], 00:10:15.594 | 99.99th=[ 1106] 
00:10:15.594 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:15.594 slat (usec): min=18, max=116, avg=22.60, stdev= 5.15 00:10:15.594 clat (usec): min=105, max=684, avg=200.60, stdev=23.35 00:10:15.594 lat (usec): min=128, max=718, avg=223.20, stdev=24.35 00:10:15.594 clat percentiles (usec): 00:10:15.594 | 1.00th=[ 131], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:10:15.594 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:10:15.594 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 237], 00:10:15.594 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 326], 99.95th=[ 334], 00:10:15.594 | 99.99th=[ 685] 00:10:15.594 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:15.594 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:15.594 lat (usec) : 250=65.39%, 500=34.51%, 750=0.08% 00:10:15.594 lat (msec) : 2=0.03% 00:10:15.594 cpu : usr=1.30%, sys=6.20%, ctx=3988, majf=0, minf=5 00:10:15.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.594 issued rwts: total=1939,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.594 00:10:15.594 Run status group 0 (all jobs): 00:10:15.594 READ: bw=36.4MiB/s (38.2MB/s), 7744KiB/s-11.3MiB/s (7930kB/s-11.8MB/s), io=36.5MiB (38.2MB), run=1001-1001msec 00:10:15.594 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:15.594 00:10:15.594 Disk stats (read/write): 00:10:15.594 nvme0n1: ios=1586/1924, merge=0/0, ticks=422/412, in_queue=834, util=88.38% 00:10:15.594 nvme0n2: ios=2588/2560, merge=0/0, ticks=458/353, in_queue=811, util=88.75% 00:10:15.594 nvme0n3: ios=2260/2560, merge=0/0, ticks=414/376, in_queue=790, util=89.03% 00:10:15.594 nvme0n4: ios=1536/1933, merge=0/0, ticks=402/400, in_queue=802, util=89.68% 00:10:15.594 09:00:52 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:15.594 [global] 00:10:15.594 thread=1 00:10:15.594 invalidate=1 00:10:15.594 rw=randwrite 00:10:15.594 time_based=1 00:10:15.594 runtime=1 00:10:15.594 ioengine=libaio 00:10:15.594 direct=1 00:10:15.594 bs=4096 00:10:15.594 iodepth=1 00:10:15.594 norandommap=0 00:10:15.594 numjobs=1 00:10:15.594 00:10:15.594 verify_dump=1 00:10:15.594 verify_backlog=512 00:10:15.594 verify_state_save=0 00:10:15.594 do_verify=1 00:10:15.594 verify=crc32c-intel 00:10:15.594 [job0] 00:10:15.594 filename=/dev/nvme0n1 00:10:15.594 [job1] 00:10:15.594 filename=/dev/nvme0n2 00:10:15.594 [job2] 00:10:15.594 filename=/dev/nvme0n3 00:10:15.594 [job3] 00:10:15.594 filename=/dev/nvme0n4 00:10:15.594 Could not set queue depth (nvme0n1) 00:10:15.594 Could not set queue depth (nvme0n2) 00:10:15.594 Could not set queue depth (nvme0n3) 00:10:15.594 Could not set queue depth (nvme0n4) 00:10:15.594 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.594 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.594 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.594 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.594 fio-3.35 00:10:15.594 Starting 4 threads 00:10:16.972 00:10:16.972 job0: (groupid=0, jobs=1): err= 0: pid=63726: Sun Nov 17 09:00:53 2024 00:10:16.972 read: IOPS=2024, BW=8100KiB/s (8294kB/s)(8108KiB/1001msec) 00:10:16.972 slat (usec): min=11, max=223, avg=13.74, stdev= 5.37 00:10:16.972 clat (usec): min=132, max=919, avg=254.45, stdev=32.23 00:10:16.972 lat (usec): min=145, max=931, avg=268.19, stdev=32.83 00:10:16.972 clat percentiles (usec): 00:10:16.972 | 1.00th=[ 161], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:10:16.972 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:10:16.972 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:10:16.972 | 99.00th=[ 338], 99.50th=[ 465], 99.90th=[ 523], 99.95th=[ 619], 00:10:16.972 | 99.99th=[ 922] 00:10:16.972 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:16.972 slat (nsec): min=18463, max=97757, avg=21305.44, stdev=4790.45 00:10:16.972 clat (usec): min=101, max=300, avg=198.05, stdev=19.35 00:10:16.972 lat (usec): min=120, max=365, avg=219.35, stdev=20.88 00:10:16.972 clat percentiles (usec): 00:10:16.972 | 1.00th=[ 137], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:10:16.972 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:10:16.972 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 233], 00:10:16.972 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 277], 99.95th=[ 281], 00:10:16.972 | 99.99th=[ 302] 00:10:16.972 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:16.972 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:16.972 lat (usec) : 250=71.80%, 500=28.05%, 750=0.12%, 1000=0.02% 00:10:16.972 cpu : usr=0.80%, sys=6.50%, ctx=4077, majf=0, minf=13 00:10:16.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.973 issued rwts: total=2027,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.973 job1: (groupid=0, jobs=1): err= 0: pid=63727: Sun Nov 17 09:00:53 2024 00:10:16.973 read: IOPS=2952, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:10:16.973 slat (nsec): min=10918, max=58334, avg=13079.15, stdev=2426.12 00:10:16.973 clat (usec): min=130, max=2439, avg=170.29, stdev=88.62 00:10:16.973 lat (usec): min=143, max=2470, avg=183.37, stdev=89.39 00:10:16.973 clat percentiles (usec): 00:10:16.973 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:16.973 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:10:16.973 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:10:16.973 | 99.00th=[ 208], 99.50th=[ 314], 99.90th=[ 2114], 99.95th=[ 2376], 00:10:16.973 | 99.99th=[ 2442] 00:10:16.973 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:16.973 slat (nsec): min=14169, max=89028, avg=19369.76, stdev=2786.21 00:10:16.973 clat (usec): min=96, max=192, avg=126.74, stdev=11.50 00:10:16.973 lat (usec): min=113, max=281, avg=146.11, stdev=11.76 00:10:16.973 clat percentiles (usec): 00:10:16.973 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 118], 00:10:16.973 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 129], 00:10:16.973 | 70.00th=[ 133], 80.00th=[ 
137], 90.00th=[ 141], 95.00th=[ 147], 00:10:16.973 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 194], 00:10:16.973 | 99.99th=[ 194] 00:10:16.973 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:16.973 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:16.973 lat (usec) : 100=0.08%, 250=99.59%, 500=0.15%, 750=0.05%, 1000=0.03% 00:10:16.973 lat (msec) : 2=0.03%, 4=0.07% 00:10:16.973 cpu : usr=2.10%, sys=7.80%, ctx=6030, majf=0, minf=13 00:10:16.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.973 issued rwts: total=2955,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.973 job2: (groupid=0, jobs=1): err= 0: pid=63729: Sun Nov 17 09:00:53 2024 00:10:16.973 read: IOPS=2664, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:10:16.973 slat (nsec): min=11711, max=36328, avg=14535.52, stdev=2057.61 00:10:16.973 clat (usec): min=141, max=558, avg=175.85, stdev=16.93 00:10:16.973 lat (usec): min=154, max=584, avg=190.39, stdev=17.38 00:10:16.973 clat percentiles (usec): 00:10:16.973 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:16.973 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:16.973 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 198], 00:10:16.973 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 306], 99.95th=[ 490], 00:10:16.973 | 99.99th=[ 562] 00:10:16.973 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:16.973 slat (usec): min=15, max=100, avg=21.84, stdev= 4.22 00:10:16.973 clat (usec): min=101, max=596, avg=135.23, stdev=15.90 00:10:16.973 lat (usec): min=121, max=628, avg=157.08, stdev=16.89 00:10:16.973 clat percentiles (usec): 00:10:16.973 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:10:16.973 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:16.973 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:10:16.973 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 281], 99.95th=[ 351], 00:10:16.973 | 99.99th=[ 594] 00:10:16.973 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:16.973 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:16.973 lat (usec) : 250=99.83%, 500=0.14%, 750=0.03% 00:10:16.973 cpu : usr=2.10%, sys=8.50%, ctx=5744, majf=0, minf=7 00:10:16.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.973 issued rwts: total=2667,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.973 job3: (groupid=0, jobs=1): err= 0: pid=63730: Sun Nov 17 09:00:53 2024 00:10:16.973 read: IOPS=2006, BW=8028KiB/s (8221kB/s)(8036KiB/1001msec) 00:10:16.973 slat (nsec): min=11769, max=36776, avg=13285.50, stdev=2048.66 00:10:16.973 clat (usec): min=159, max=2733, avg=256.63, stdev=72.74 00:10:16.973 lat (usec): min=172, max=2758, avg=269.91, stdev=73.01 00:10:16.973 clat percentiles (usec): 00:10:16.973 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 241], 00:10:16.973 | 30.00th=[ 
245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:10:16.973 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:10:16.973 | 99.00th=[ 334], 99.50th=[ 379], 99.90th=[ 506], 99.95th=[ 2114], 00:10:16.973 | 99.99th=[ 2737] 00:10:16.973 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:16.973 slat (nsec): min=16160, max=78729, avg=20778.96, stdev=4674.73 00:10:16.973 clat (usec): min=114, max=437, avg=199.50, stdev=21.99 00:10:16.973 lat (usec): min=133, max=473, avg=220.28, stdev=23.68 00:10:16.973 clat percentiles (usec): 00:10:16.973 | 1.00th=[ 145], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:10:16.973 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:10:16.973 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 233], 00:10:16.973 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 371], 99.95th=[ 392], 00:10:16.973 | 99.99th=[ 437] 00:10:16.973 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:16.973 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:16.973 lat (usec) : 250=70.54%, 500=29.38%, 750=0.02% 00:10:16.973 lat (msec) : 4=0.05% 00:10:16.973 cpu : usr=1.20%, sys=5.80%, ctx=4062, majf=0, minf=15 00:10:16.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.973 issued rwts: total=2009,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.973 00:10:16.973 Run status group 0 (all jobs): 00:10:16.973 READ: bw=37.7MiB/s (39.5MB/s), 8028KiB/s-11.5MiB/s (8221kB/s-12.1MB/s), io=37.7MiB (39.6MB), run=1001-1001msec 00:10:16.973 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:16.973 00:10:16.973 Disk stats (read/write): 00:10:16.973 nvme0n1: ios=1586/2012, merge=0/0, ticks=405/414, in_queue=819, util=87.98% 00:10:16.973 nvme0n2: ios=2609/2585, merge=0/0, ticks=450/334, in_queue=784, util=87.99% 00:10:16.973 nvme0n3: ios=2331/2560, merge=0/0, ticks=414/371, in_queue=785, util=89.11% 00:10:16.973 nvme0n4: ios=1536/1984, merge=0/0, ticks=391/413, in_queue=804, util=89.76% 00:10:16.973 09:00:53 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:16.973 [global] 00:10:16.973 thread=1 00:10:16.973 invalidate=1 00:10:16.973 rw=write 00:10:16.973 time_based=1 00:10:16.973 runtime=1 00:10:16.973 ioengine=libaio 00:10:16.973 direct=1 00:10:16.973 bs=4096 00:10:16.973 iodepth=128 00:10:16.973 norandommap=0 00:10:16.973 numjobs=1 00:10:16.973 00:10:16.973 verify_dump=1 00:10:16.973 verify_backlog=512 00:10:16.973 verify_state_save=0 00:10:16.973 do_verify=1 00:10:16.973 verify=crc32c-intel 00:10:16.973 [job0] 00:10:16.973 filename=/dev/nvme0n1 00:10:16.973 [job1] 00:10:16.973 filename=/dev/nvme0n2 00:10:16.973 [job2] 00:10:16.973 filename=/dev/nvme0n3 00:10:16.973 [job3] 00:10:16.973 filename=/dev/nvme0n4 00:10:16.973 Could not set queue depth (nvme0n1) 00:10:16.973 Could not set queue depth (nvme0n2) 00:10:16.973 Could not set queue depth (nvme0n3) 00:10:16.973 Could not set queue depth (nvme0n4) 00:10:16.973 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.973 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.973 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.973 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.973 fio-3.35 00:10:16.973 Starting 4 threads 00:10:18.351 00:10:18.351 job0: (groupid=0, jobs=1): err= 0: pid=63790: Sun Nov 17 09:00:54 2024 00:10:18.351 read: IOPS=2376, BW=9507KiB/s (9736kB/s)(9536KiB/1003msec) 00:10:18.351 slat (usec): min=5, max=8884, avg=188.46, stdev=829.77 00:10:18.351 clat (usec): min=674, max=42215, avg=22974.97, stdev=6098.66 00:10:18.351 lat (usec): min=5001, max=42536, avg=23163.44, stdev=6161.42 00:10:18.351 clat percentiles (usec): 00:10:18.351 | 1.00th=[ 7046], 5.00th=[15533], 10.00th=[16909], 20.00th=[19268], 00:10:18.351 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[22676], 00:10:18.351 | 70.00th=[27132], 80.00th=[29754], 90.00th=[30802], 95.00th=[33817], 00:10:18.351 | 99.00th=[37487], 99.50th=[38536], 99.90th=[42206], 99.95th=[42206], 00:10:18.351 | 99.99th=[42206] 00:10:18.351 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:10:18.351 slat (usec): min=14, max=6090, avg=207.65, stdev=820.44 00:10:18.351 clat (usec): min=12099, max=62234, avg=28098.39, stdev=13544.58 00:10:18.351 lat (usec): min=12122, max=62278, avg=28306.04, stdev=13638.25 00:10:18.351 clat percentiles (usec): 00:10:18.351 | 1.00th=[12649], 5.00th=[14091], 10.00th=[14222], 20.00th=[16909], 00:10:18.351 | 30.00th=[17433], 40.00th=[19530], 50.00th=[21103], 60.00th=[28705], 00:10:18.351 | 70.00th=[36963], 80.00th=[40633], 90.00th=[49546], 95.00th=[53740], 00:10:18.351 | 99.00th=[59507], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:10:18.351 | 99.99th=[62129] 00:10:18.351 bw ( KiB/s): min= 8192, max=12288, per=15.56%, avg=10240.00, stdev=2896.31, samples=2 00:10:18.351 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:18.351 lat (usec) : 750=0.02% 00:10:18.351 lat (msec) : 10=0.85%, 20=39.99%, 50=54.23%, 100=4.92% 00:10:18.351 cpu : usr=2.50%, sys=7.58%, ctx=277, majf=0, minf=7 00:10:18.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:18.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.351 issued rwts: total=2384,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.351 job1: (groupid=0, jobs=1): err= 0: pid=63791: Sun Nov 17 09:00:54 2024 00:10:18.351 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:18.351 slat (usec): min=5, max=3105, avg=83.55, stdev=343.02 00:10:18.351 clat (usec): min=8156, max=14205, avg=11128.05, stdev=913.61 00:10:18.351 lat (usec): min=8182, max=15019, avg=11211.60, stdev=940.03 00:10:18.351 clat percentiles (usec): 00:10:18.351 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10552], 00:10:18.351 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:18.351 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12387], 95.00th=[12780], 00:10:18.351 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13829], 99.95th=[14091], 00:10:18.351 | 99.99th=[14222] 00:10:18.351 write: IOPS=5734, BW=22.4MiB/s (23.5MB/s)(22.4MiB/1002msec); 0 zone resets 00:10:18.351 slat (usec): min=11, max=3111, avg=85.01, stdev=388.61 00:10:18.351 
clat (usec): min=164, max=14846, avg=11127.16, stdev=1060.80 00:10:18.351 lat (usec): min=2958, max=14908, avg=11212.17, stdev=1117.48 00:10:18.351 clat percentiles (usec): 00:10:18.351 | 1.00th=[ 6849], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:10:18.351 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:10:18.351 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12518], 00:10:18.351 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14615], 99.95th=[14746], 00:10:18.351 | 99.99th=[14877] 00:10:18.351 bw ( KiB/s): min=21208, max=23943, per=34.31%, avg=22575.50, stdev=1933.94, samples=2 00:10:18.351 iops : min= 5302, max= 5985, avg=5643.50, stdev=482.95, samples=2 00:10:18.351 lat (usec) : 250=0.01% 00:10:18.351 lat (msec) : 4=0.37%, 10=7.47%, 20=92.15% 00:10:18.351 cpu : usr=4.30%, sys=15.08%, ctx=473, majf=0, minf=10 00:10:18.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:18.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.351 issued rwts: total=5632,5746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.351 job2: (groupid=0, jobs=1): err= 0: pid=63792: Sun Nov 17 09:00:54 2024 00:10:18.351 read: IOPS=2808, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1003msec) 00:10:18.351 slat (usec): min=6, max=10425, avg=188.62, stdev=1018.21 00:10:18.351 clat (usec): min=328, max=42580, avg=24317.92, stdev=7734.00 00:10:18.351 lat (usec): min=4774, max=42595, avg=24506.54, stdev=7719.32 00:10:18.351 clat percentiles (usec): 00:10:18.351 | 1.00th=[ 5276], 5.00th=[16450], 10.00th=[17433], 20.00th=[17957], 00:10:18.351 | 30.00th=[18744], 40.00th=[19006], 50.00th=[24249], 60.00th=[25822], 00:10:18.351 | 70.00th=[26608], 80.00th=[29754], 90.00th=[36963], 95.00th=[41681], 00:10:18.351 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:10:18.351 | 99.99th=[42730] 00:10:18.351 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:10:18.351 slat (usec): min=10, max=10121, avg=145.01, stdev=715.79 00:10:18.351 clat (usec): min=11641, max=30796, avg=18608.47, stdev=3912.73 00:10:18.351 lat (usec): min=14392, max=30823, avg=18753.48, stdev=3881.05 00:10:18.351 clat percentiles (usec): 00:10:18.351 | 1.00th=[12518], 5.00th=[14746], 10.00th=[14877], 20.00th=[15139], 00:10:18.351 | 30.00th=[15401], 40.00th=[16319], 50.00th=[17433], 60.00th=[19530], 00:10:18.351 | 70.00th=[20317], 80.00th=[21103], 90.00th=[25035], 95.00th=[27657], 00:10:18.351 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:10:18.351 | 99.99th=[30802] 00:10:18.351 bw ( KiB/s): min=10036, max=14560, per=18.69%, avg=12298.00, stdev=3198.95, samples=2 00:10:18.351 iops : min= 2509, max= 3640, avg=3074.50, stdev=799.74, samples=2 00:10:18.351 lat (usec) : 500=0.02% 00:10:18.352 lat (msec) : 10=0.54%, 20=54.44%, 50=45.00% 00:10:18.352 cpu : usr=2.40%, sys=9.18%, ctx=185, majf=0, minf=15 00:10:18.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:18.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.352 issued rwts: total=2817,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.352 job3: (groupid=0, jobs=1): err= 0: 
pid=63793: Sun Nov 17 09:00:54 2024 00:10:18.352 read: IOPS=4971, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:10:18.352 slat (usec): min=5, max=5758, avg=96.68, stdev=478.85 00:10:18.352 clat (usec): min=649, max=18867, avg=12443.96, stdev=1680.89 00:10:18.352 lat (usec): min=3987, max=19030, avg=12540.63, stdev=1700.86 00:10:18.352 clat percentiles (usec): 00:10:18.352 | 1.00th=[ 6259], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11338], 00:10:18.352 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:10:18.352 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14091], 95.00th=[14746], 00:10:18.352 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:10:18.352 | 99.99th=[18744] 00:10:18.352 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:18.352 slat (usec): min=11, max=5322, avg=93.54, stdev=504.50 00:10:18.352 clat (usec): min=7100, max=19290, avg=12650.46, stdev=1225.52 00:10:18.352 lat (usec): min=7122, max=19308, avg=12744.00, stdev=1310.05 00:10:18.352 clat percentiles (usec): 00:10:18.352 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:10:18.352 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:10:18.352 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14615], 00:10:18.352 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[19268], 00:10:18.352 | 99.99th=[19268] 00:10:18.352 bw ( KiB/s): min=20480, max=20521, per=31.16%, avg=20500.50, stdev=28.99, samples=2 00:10:18.352 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:18.352 lat (usec) : 750=0.01% 00:10:18.352 lat (msec) : 4=0.01%, 10=4.02%, 20=95.96% 00:10:18.352 cpu : usr=4.49%, sys=14.27%, ctx=418, majf=0, minf=8 00:10:18.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:18.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.352 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.352 00:10:18.352 Run status group 0 (all jobs): 00:10:18.352 READ: bw=61.6MiB/s (64.6MB/s), 9507KiB/s-22.0MiB/s (9736kB/s-23.0MB/s), io=61.8MiB (64.8MB), run=1002-1003msec 00:10:18.352 WRITE: bw=64.3MiB/s (67.4MB/s), 9.97MiB/s-22.4MiB/s (10.5MB/s-23.5MB/s), io=64.4MiB (67.6MB), run=1002-1003msec 00:10:18.352 00:10:18.352 Disk stats (read/write): 00:10:18.352 nvme0n1: ios=2098/2319, merge=0/0, ticks=15716/18305, in_queue=34021, util=88.29% 00:10:18.352 nvme0n2: ios=4657/5103, merge=0/0, ticks=16208/16009, in_queue=32217, util=88.60% 00:10:18.352 nvme0n3: ios=2336/2560, merge=0/0, ticks=14219/10915, in_queue=25134, util=88.91% 00:10:18.352 nvme0n4: ios=4096/4506, merge=0/0, ticks=24730/24001, in_queue=48731, util=89.68% 00:10:18.352 09:00:54 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:18.352 [global] 00:10:18.352 thread=1 00:10:18.352 invalidate=1 00:10:18.352 rw=randwrite 00:10:18.352 time_based=1 00:10:18.352 runtime=1 00:10:18.352 ioengine=libaio 00:10:18.352 direct=1 00:10:18.352 bs=4096 00:10:18.352 iodepth=128 00:10:18.352 norandommap=0 00:10:18.352 numjobs=1 00:10:18.352 00:10:18.352 verify_dump=1 00:10:18.352 verify_backlog=512 00:10:18.352 verify_state_save=0 00:10:18.352 do_verify=1 00:10:18.352 verify=crc32c-intel 00:10:18.352 [job0] 00:10:18.352 filename=/dev/nvme0n1 
00:10:18.352 [job1] 00:10:18.352 filename=/dev/nvme0n2 00:10:18.352 [job2] 00:10:18.352 filename=/dev/nvme0n3 00:10:18.352 [job3] 00:10:18.352 filename=/dev/nvme0n4 00:10:18.352 Could not set queue depth (nvme0n1) 00:10:18.352 Could not set queue depth (nvme0n2) 00:10:18.352 Could not set queue depth (nvme0n3) 00:10:18.352 Could not set queue depth (nvme0n4) 00:10:18.352 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.352 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.352 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.352 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.352 fio-3.35 00:10:18.352 Starting 4 threads 00:10:19.730 00:10:19.730 job0: (groupid=0, jobs=1): err= 0: pid=63846: Sun Nov 17 09:00:56 2024 00:10:19.730 read: IOPS=4050, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1004msec) 00:10:19.730 slat (usec): min=7, max=9036, avg=114.46, stdev=580.30 00:10:19.730 clat (usec): min=3043, max=36082, avg=15188.69, stdev=5888.01 00:10:19.730 lat (usec): min=3051, max=36102, avg=15303.15, stdev=5931.94 00:10:19.730 clat percentiles (usec): 00:10:19.730 | 1.00th=[ 3490], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11469], 00:10:19.730 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:10:19.730 | 70.00th=[14091], 80.00th=[22676], 90.00th=[25035], 95.00th=[26870], 00:10:19.730 | 99.00th=[31327], 99.50th=[32900], 99.90th=[34341], 99.95th=[35914], 00:10:19.730 | 99.99th=[35914] 00:10:19.730 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:19.730 slat (usec): min=11, max=9284, avg=122.64, stdev=659.37 00:10:19.730 clat (usec): min=6973, max=31040, avg=15532.94, stdev=4909.61 00:10:19.730 lat (usec): min=6995, max=34019, avg=15655.57, stdev=4970.61 00:10:19.730 clat percentiles (usec): 00:10:19.730 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[11600], 20.00th=[11863], 00:10:19.730 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:10:19.730 | 70.00th=[17695], 80.00th=[22414], 90.00th=[23725], 95.00th=[23725], 00:10:19.730 | 99.00th=[24511], 99.50th=[24773], 99.90th=[28967], 99.95th=[28967], 00:10:19.730 | 99.99th=[31065] 00:10:19.730 bw ( KiB/s): min=12288, max=20480, per=21.91%, avg=16384.00, stdev=5792.62, samples=2 00:10:19.730 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:10:19.730 lat (msec) : 4=0.53%, 10=3.26%, 20=70.80%, 50=25.42% 00:10:19.730 cpu : usr=3.59%, sys=11.57%, ctx=419, majf=0, minf=15 00:10:19.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:19.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.730 issued rwts: total=4067,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.730 job1: (groupid=0, jobs=1): err= 0: pid=63847: Sun Nov 17 09:00:56 2024 00:10:19.730 read: IOPS=5597, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1005msec) 00:10:19.730 slat (usec): min=7, max=5155, avg=82.61, stdev=511.81 00:10:19.730 clat (usec): min=1060, max=21164, avg=11408.92, stdev=1573.37 00:10:19.730 lat (usec): min=4971, max=24551, avg=11491.53, stdev=1590.59 00:10:19.730 clat percentiles (usec): 00:10:19.730 | 1.00th=[ 
6718], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10552], 00:10:19.730 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11600], 60.00th=[11863], 00:10:19.730 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12649], 95.00th=[12780], 00:10:19.730 | 99.00th=[17433], 99.50th=[19268], 99.90th=[21103], 99.95th=[21103], 00:10:19.730 | 99.99th=[21103] 00:10:19.730 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:19.730 slat (usec): min=7, max=8192, avg=88.27, stdev=523.85 00:10:19.730 clat (usec): min=5556, max=17913, avg=11224.07, stdev=1214.51 00:10:19.730 lat (usec): min=7378, max=18139, avg=11312.35, stdev=1127.64 00:10:19.730 clat percentiles (usec): 00:10:19.730 | 1.00th=[ 7439], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:10:19.730 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:10:19.731 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[13173], 00:10:19.731 | 99.00th=[14746], 99.50th=[15270], 99.90th=[17957], 99.95th=[17957], 00:10:19.731 | 99.99th=[17957] 00:10:19.731 bw ( KiB/s): min=20480, max=24478, per=30.06%, avg=22479.00, stdev=2827.01, samples=2 00:10:19.731 iops : min= 5120, max= 6119, avg=5619.50, stdev=706.40, samples=2 00:10:19.731 lat (msec) : 2=0.01%, 10=8.09%, 20=91.83%, 50=0.07% 00:10:19.731 cpu : usr=4.98%, sys=14.24%, ctx=229, majf=0, minf=13 00:10:19.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:19.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.731 issued rwts: total=5625,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.731 job2: (groupid=0, jobs=1): err= 0: pid=63848: Sun Nov 17 09:00:56 2024 00:10:19.731 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:10:19.731 slat (usec): min=8, max=6443, avg=96.36, stdev=626.02 00:10:19.731 clat (usec): min=7239, max=23384, avg=13322.56, stdev=1776.88 00:10:19.731 lat (usec): min=7255, max=27398, avg=13418.92, stdev=1795.42 00:10:19.731 clat percentiles (usec): 00:10:19.731 | 1.00th=[ 7767], 5.00th=[11469], 10.00th=[11994], 20.00th=[12387], 00:10:19.731 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13304], 60.00th=[13698], 00:10:19.731 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14615], 95.00th=[15008], 00:10:19.731 | 99.00th=[20317], 99.50th=[22676], 99.90th=[23462], 99.95th=[23462], 00:10:19.731 | 99.99th=[23462] 00:10:19.731 write: IOPS=5081, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:19.731 slat (usec): min=11, max=9701, avg=102.63, stdev=647.28 00:10:19.731 clat (usec): min=260, max=20049, avg=12931.95, stdev=1600.27 00:10:19.731 lat (usec): min=8585, max=20308, avg=13034.58, stdev=1506.69 00:10:19.731 clat percentiles (usec): 00:10:19.731 | 1.00th=[ 8586], 5.00th=[10683], 10.00th=[11207], 20.00th=[11731], 00:10:19.731 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13042], 60.00th=[13304], 00:10:19.731 | 70.00th=[13698], 80.00th=[14222], 90.00th=[14484], 95.00th=[14877], 00:10:19.731 | 99.00th=[17695], 99.50th=[18744], 99.90th=[20055], 99.95th=[20055], 00:10:19.731 | 99.99th=[20055] 00:10:19.731 bw ( KiB/s): min=19432, max=20439, per=26.66%, avg=19935.50, stdev=712.06, samples=2 00:10:19.731 iops : min= 4858, max= 5109, avg=4983.50, stdev=177.48, samples=2 00:10:19.731 lat (usec) : 500=0.01% 00:10:19.731 lat (msec) : 10=3.74%, 20=95.58%, 50=0.67% 00:10:19.731 cpu : usr=3.88%, sys=13.12%, ctx=200, 
majf=0, minf=9 00:10:19.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:19.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.731 issued rwts: total=4608,5117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.731 job3: (groupid=0, jobs=1): err= 0: pid=63849: Sun Nov 17 09:00:56 2024 00:10:19.731 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:10:19.731 slat (usec): min=6, max=9366, avg=128.60, stdev=611.00 00:10:19.731 clat (usec): min=10100, max=33899, avg=16159.70, stdev=4418.65 00:10:19.731 lat (usec): min=10123, max=33934, avg=16288.30, stdev=4461.83 00:10:19.731 clat percentiles (usec): 00:10:19.731 | 1.00th=[10945], 5.00th=[11469], 10.00th=[11994], 20.00th=[12387], 00:10:19.731 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14877], 60.00th=[15401], 00:10:19.731 | 70.00th=[16319], 80.00th=[20579], 90.00th=[23200], 95.00th=[24511], 00:10:19.731 | 99.00th=[29230], 99.50th=[29230], 99.90th=[32375], 99.95th=[32375], 00:10:19.731 | 99.99th=[33817] 00:10:19.731 write: IOPS=3958, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1005msec); 0 zone resets 00:10:19.731 slat (usec): min=8, max=9974, avg=127.37, stdev=622.08 00:10:19.731 clat (usec): min=4025, max=32254, avg=17456.67, stdev=4896.66 00:10:19.731 lat (usec): min=4771, max=32271, avg=17584.04, stdev=4910.30 00:10:19.731 clat percentiles (usec): 00:10:19.731 | 1.00th=[ 8717], 5.00th=[12780], 10.00th=[13304], 20.00th=[13698], 00:10:19.731 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14615], 60.00th=[16450], 00:10:19.731 | 70.00th=[21103], 80.00th=[23200], 90.00th=[23725], 95.00th=[25035], 00:10:19.731 | 99.00th=[29492], 99.50th=[30278], 99.90th=[31851], 99.95th=[31851], 00:10:19.731 | 99.99th=[32375] 00:10:19.731 bw ( KiB/s): min=12239, max=18520, per=20.57%, avg=15379.50, stdev=4441.34, samples=2 00:10:19.731 iops : min= 3059, max= 4630, avg=3844.50, stdev=1110.86, samples=2 00:10:19.731 lat (msec) : 10=0.66%, 20=72.22%, 50=27.12% 00:10:19.731 cpu : usr=3.88%, sys=10.96%, ctx=455, majf=0, minf=7 00:10:19.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:19.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.731 issued rwts: total=3584,3978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.731 00:10:19.731 Run status group 0 (all jobs): 00:10:19.731 READ: bw=69.4MiB/s (72.7MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-22.9MB/s), io=69.9MiB (73.3MB), run=1004-1007msec 00:10:19.731 WRITE: bw=73.0MiB/s (76.6MB/s), 15.5MiB/s-21.9MiB/s (16.2MB/s-23.0MB/s), io=73.5MiB (77.1MB), run=1004-1007msec 00:10:19.731 00:10:19.731 Disk stats (read/write): 00:10:19.731 nvme0n1: ios=3122/3514, merge=0/0, ticks=23832/24611, in_queue=48443, util=86.07% 00:10:19.731 nvme0n2: ios=4657/4992, merge=0/0, ticks=49124/51051, in_queue=100175, util=89.07% 00:10:19.731 nvme0n3: ios=4088/4168, merge=0/0, ticks=50797/49908, in_queue=100705, util=89.12% 00:10:19.731 nvme0n4: ios=3072/3169, merge=0/0, ticks=20092/22896, in_queue=42988, util=89.68% 00:10:19.731 09:00:56 -- target/fio.sh@55 -- # sync 00:10:19.731 09:00:56 -- target/fio.sh@59 -- # fio_pid=63868 00:10:19.731 09:00:56 -- target/fio.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:19.731 09:00:56 -- target/fio.sh@61 -- # sleep 3 00:10:19.731 [global] 00:10:19.731 thread=1 00:10:19.731 invalidate=1 00:10:19.731 rw=read 00:10:19.731 time_based=1 00:10:19.731 runtime=10 00:10:19.731 ioengine=libaio 00:10:19.731 direct=1 00:10:19.731 bs=4096 00:10:19.731 iodepth=1 00:10:19.731 norandommap=1 00:10:19.731 numjobs=1 00:10:19.731 00:10:19.731 [job0] 00:10:19.731 filename=/dev/nvme0n1 00:10:19.731 [job1] 00:10:19.731 filename=/dev/nvme0n2 00:10:19.731 [job2] 00:10:19.731 filename=/dev/nvme0n3 00:10:19.731 [job3] 00:10:19.731 filename=/dev/nvme0n4 00:10:19.731 Could not set queue depth (nvme0n1) 00:10:19.731 Could not set queue depth (nvme0n2) 00:10:19.731 Could not set queue depth (nvme0n3) 00:10:19.731 Could not set queue depth (nvme0n4) 00:10:19.731 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.731 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.731 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.731 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.731 fio-3.35 00:10:19.731 Starting 4 threads 00:10:23.014 09:00:59 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:23.014 fio: pid=63911, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:23.014 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=61530112, buflen=4096 00:10:23.014 09:00:59 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:23.014 fio: pid=63910, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:23.014 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46157824, buflen=4096 00:10:23.014 09:00:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.014 09:00:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:23.273 fio: pid=63908, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:23.273 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8339456, buflen=4096 00:10:23.273 09:01:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.273 09:01:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:23.532 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56492032, buflen=4096 00:10:23.532 fio: pid=63909, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:23.532 09:01:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.532 09:01:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:23.532 00:10:23.532 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63908: Sun Nov 17 09:01:00 2024 00:10:23.532 read: IOPS=5392, BW=21.1MiB/s (22.1MB/s)(72.0MiB/3416msec) 00:10:23.532 slat (usec): min=8, max=9917, avg=15.01, stdev=115.26 00:10:23.532 clat (usec): min=124, max=2596, avg=169.06, stdev=42.78 00:10:23.532 lat 
(usec): min=136, max=10486, avg=184.07, stdev=125.73 00:10:23.532 clat percentiles (usec): 00:10:23.532 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:23.532 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:23.532 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 196], 95.00th=[ 219], 00:10:23.532 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 396], 99.95th=[ 611], 00:10:23.532 | 99.99th=[ 2474] 00:10:23.532 bw ( KiB/s): min=21496, max=22936, per=35.33%, avg=22485.33, stdev=596.67, samples=6 00:10:23.532 iops : min= 5374, max= 5734, avg=5621.33, stdev=149.17, samples=6 00:10:23.532 lat (usec) : 250=96.53%, 500=3.40%, 750=0.03%, 1000=0.01% 00:10:23.532 lat (msec) : 2=0.01%, 4=0.01% 00:10:23.532 cpu : usr=1.46%, sys=6.71%, ctx=18429, majf=0, minf=1 00:10:23.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.532 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.532 issued rwts: total=18421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.532 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63909: Sun Nov 17 09:01:00 2024 00:10:23.532 read: IOPS=3751, BW=14.7MiB/s (15.4MB/s)(53.9MiB/3677msec) 00:10:23.532 slat (usec): min=8, max=11207, avg=17.61, stdev=177.16 00:10:23.532 clat (usec): min=50, max=2679, avg=247.57, stdev=54.92 00:10:23.532 lat (usec): min=135, max=11482, avg=265.18, stdev=186.59 00:10:23.532 clat percentiles (usec): 00:10:23.532 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 167], 20.00th=[ 229], 00:10:23.533 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:10:23.533 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 318], 00:10:23.533 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 465], 99.95th=[ 791], 00:10:23.533 | 99.99th=[ 1713] 00:10:23.533 bw ( KiB/s): min=12896, max=17174, per=23.27%, avg=14812.29, stdev=1247.73, samples=7 00:10:23.533 iops : min= 3224, max= 4293, avg=3703.00, stdev=311.77, samples=7 00:10:23.533 lat (usec) : 100=0.01%, 250=44.45%, 500=55.46%, 750=0.03%, 1000=0.01% 00:10:23.533 lat (msec) : 2=0.03%, 4=0.01% 00:10:23.533 cpu : usr=1.09%, sys=4.52%, ctx=13807, majf=0, minf=2 00:10:23.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.533 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.533 issued rwts: total=13793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.533 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63910: Sun Nov 17 09:01:00 2024 00:10:23.533 read: IOPS=3556, BW=13.9MiB/s (14.6MB/s)(44.0MiB/3169msec) 00:10:23.533 slat (usec): min=8, max=7695, avg=15.52, stdev=101.61 00:10:23.533 clat (usec): min=148, max=3634, avg=264.43, stdev=56.24 00:10:23.533 lat (usec): min=161, max=8013, avg=279.95, stdev=117.02 00:10:23.533 clat percentiles (usec): 00:10:23.533 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:10:23.533 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:10:23.533 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 326], 00:10:23.533 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 465], 
99.95th=[ 816], 00:10:23.533 | 99.99th=[ 2835] 00:10:23.533 bw ( KiB/s): min=12736, max=14840, per=22.53%, avg=14338.67, stdev=825.76, samples=6 00:10:23.533 iops : min= 3184, max= 3710, avg=3584.67, stdev=206.44, samples=6 00:10:23.533 lat (usec) : 250=31.64%, 500=68.27%, 750=0.03%, 1000=0.01% 00:10:23.533 lat (msec) : 2=0.02%, 4=0.03% 00:10:23.533 cpu : usr=1.14%, sys=4.26%, ctx=11282, majf=0, minf=1 00:10:23.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.533 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.533 issued rwts: total=11270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.533 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63911: Sun Nov 17 09:01:00 2024 00:10:23.533 read: IOPS=5132, BW=20.0MiB/s (21.0MB/s)(58.7MiB/2927msec) 00:10:23.533 slat (nsec): min=9254, max=81414, avg=13269.02, stdev=2171.55 00:10:23.533 clat (usec): min=136, max=8131, avg=180.16, stdev=76.76 00:10:23.533 lat (usec): min=148, max=8153, avg=193.43, stdev=76.81 00:10:23.533 clat percentiles (usec): 00:10:23.533 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 161], 00:10:23.533 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:10:23.533 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 219], 00:10:23.533 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 404], 99.95th=[ 519], 00:10:23.533 | 99.99th=[ 1795] 00:10:23.533 bw ( KiB/s): min=20656, max=21632, per=33.51%, avg=21329.60, stdev=403.96, samples=5 00:10:23.533 iops : min= 5164, max= 5408, avg=5332.40, stdev=100.99, samples=5 00:10:23.533 lat (usec) : 250=95.92%, 500=4.02%, 750=0.03%, 1000=0.01% 00:10:23.533 lat (msec) : 2=0.01%, 10=0.01% 00:10:23.533 cpu : usr=1.23%, sys=6.49%, ctx=15025, majf=0, minf=1 00:10:23.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.533 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.533 issued rwts: total=15023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.533 00:10:23.533 Run status group 0 (all jobs): 00:10:23.533 READ: bw=62.2MiB/s (65.2MB/s), 13.9MiB/s-21.1MiB/s (14.6MB/s-22.1MB/s), io=229MiB (240MB), run=2927-3677msec 00:10:23.533 00:10:23.533 Disk stats (read/write): 00:10:23.533 nvme0n1: ios=18219/0, merge=0/0, ticks=3070/0, in_queue=3070, util=95.54% 00:10:23.533 nvme0n2: ios=13446/0, merge=0/0, ticks=3385/0, in_queue=3385, util=95.56% 00:10:23.533 nvme0n3: ios=11107/0, merge=0/0, ticks=2952/0, in_queue=2952, util=96.40% 00:10:23.533 nvme0n4: ios=14866/0, merge=0/0, ticks=2667/0, in_queue=2667, util=96.66% 00:10:23.792 09:01:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.792 09:01:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:24.051 09:01:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:24.051 09:01:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:24.310 09:01:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:10:24.310 09:01:01 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:24.569 09:01:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:24.569 09:01:01 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:24.828 09:01:01 -- target/fio.sh@69 -- # fio_status=0 00:10:24.828 09:01:01 -- target/fio.sh@70 -- # wait 63868 00:10:24.828 09:01:01 -- target/fio.sh@70 -- # fio_status=4 00:10:24.828 09:01:01 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.828 09:01:01 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.828 09:01:01 -- common/autotest_common.sh@1208 -- # local i=0 00:10:24.828 09:01:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:24.828 09:01:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.828 09:01:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:24.828 09:01:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.828 nvmf hotplug test: fio failed as expected 00:10:24.828 09:01:01 -- common/autotest_common.sh@1220 -- # return 0 00:10:24.828 09:01:01 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:24.828 09:01:01 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:24.828 09:01:01 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.086 09:01:01 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:25.086 09:01:01 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:25.087 09:01:01 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:25.087 09:01:01 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:25.087 09:01:01 -- target/fio.sh@91 -- # nvmftestfini 00:10:25.087 09:01:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:25.087 09:01:01 -- nvmf/common.sh@116 -- # sync 00:10:25.087 09:01:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:25.087 09:01:01 -- nvmf/common.sh@119 -- # set +e 00:10:25.087 09:01:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:25.087 09:01:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:25.087 rmmod nvme_tcp 00:10:25.087 rmmod nvme_fabrics 00:10:25.087 rmmod nvme_keyring 00:10:25.087 09:01:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:25.087 09:01:02 -- nvmf/common.sh@123 -- # set -e 00:10:25.087 09:01:02 -- nvmf/common.sh@124 -- # return 0 00:10:25.087 09:01:02 -- nvmf/common.sh@477 -- # '[' -n 63480 ']' 00:10:25.087 09:01:02 -- nvmf/common.sh@478 -- # killprocess 63480 00:10:25.087 09:01:02 -- common/autotest_common.sh@936 -- # '[' -z 63480 ']' 00:10:25.087 09:01:02 -- common/autotest_common.sh@940 -- # kill -0 63480 00:10:25.346 09:01:02 -- common/autotest_common.sh@941 -- # uname 00:10:25.346 09:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:25.346 09:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63480 00:10:25.346 killing process with pid 63480 00:10:25.346 09:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:25.346 09:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:25.346 09:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63480' 00:10:25.346 09:01:02 
-- common/autotest_common.sh@955 -- # kill 63480 00:10:25.346 09:01:02 -- common/autotest_common.sh@960 -- # wait 63480 00:10:25.346 09:01:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:25.346 09:01:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:25.346 09:01:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:25.346 09:01:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.346 09:01:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:25.346 09:01:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.346 09:01:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.346 09:01:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.346 09:01:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:25.346 00:10:25.346 real 0m19.414s 00:10:25.346 user 1m13.056s 00:10:25.346 sys 0m10.280s 00:10:25.346 09:01:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:25.346 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:10:25.346 ************************************ 00:10:25.346 END TEST nvmf_fio_target 00:10:25.346 ************************************ 00:10:25.606 09:01:02 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:25.606 09:01:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:25.606 09:01:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:25.606 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:10:25.606 ************************************ 00:10:25.606 START TEST nvmf_bdevio 00:10:25.606 ************************************ 00:10:25.606 09:01:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:25.606 * Looking for test storage... 00:10:25.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:25.606 09:01:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:25.606 09:01:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:25.606 09:01:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:25.606 09:01:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:25.606 09:01:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:25.606 09:01:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:25.606 09:01:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:25.606 09:01:02 -- scripts/common.sh@335 -- # IFS=.-: 00:10:25.606 09:01:02 -- scripts/common.sh@335 -- # read -ra ver1 00:10:25.606 09:01:02 -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.606 09:01:02 -- scripts/common.sh@336 -- # read -ra ver2 00:10:25.606 09:01:02 -- scripts/common.sh@337 -- # local 'op=<' 00:10:25.606 09:01:02 -- scripts/common.sh@339 -- # ver1_l=2 00:10:25.606 09:01:02 -- scripts/common.sh@340 -- # ver2_l=1 00:10:25.606 09:01:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:25.606 09:01:02 -- scripts/common.sh@343 -- # case "$op" in 00:10:25.606 09:01:02 -- scripts/common.sh@344 -- # : 1 00:10:25.606 09:01:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:25.606 09:01:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.606 09:01:02 -- scripts/common.sh@364 -- # decimal 1 00:10:25.606 09:01:02 -- scripts/common.sh@352 -- # local d=1 00:10:25.606 09:01:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.606 09:01:02 -- scripts/common.sh@354 -- # echo 1 00:10:25.606 09:01:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:25.606 09:01:02 -- scripts/common.sh@365 -- # decimal 2 00:10:25.606 09:01:02 -- scripts/common.sh@352 -- # local d=2 00:10:25.606 09:01:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.606 09:01:02 -- scripts/common.sh@354 -- # echo 2 00:10:25.606 09:01:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:25.606 09:01:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:25.606 09:01:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:25.606 09:01:02 -- scripts/common.sh@367 -- # return 0 00:10:25.606 09:01:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.606 09:01:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.606 --rc genhtml_branch_coverage=1 00:10:25.606 --rc genhtml_function_coverage=1 00:10:25.606 --rc genhtml_legend=1 00:10:25.606 --rc geninfo_all_blocks=1 00:10:25.606 --rc geninfo_unexecuted_blocks=1 00:10:25.606 00:10:25.606 ' 00:10:25.606 09:01:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.606 --rc genhtml_branch_coverage=1 00:10:25.606 --rc genhtml_function_coverage=1 00:10:25.606 --rc genhtml_legend=1 00:10:25.606 --rc geninfo_all_blocks=1 00:10:25.606 --rc geninfo_unexecuted_blocks=1 00:10:25.606 00:10:25.606 ' 00:10:25.606 09:01:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.606 --rc genhtml_branch_coverage=1 00:10:25.606 --rc genhtml_function_coverage=1 00:10:25.606 --rc genhtml_legend=1 00:10:25.606 --rc geninfo_all_blocks=1 00:10:25.606 --rc geninfo_unexecuted_blocks=1 00:10:25.606 00:10:25.606 ' 00:10:25.606 09:01:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.606 --rc genhtml_branch_coverage=1 00:10:25.606 --rc genhtml_function_coverage=1 00:10:25.606 --rc genhtml_legend=1 00:10:25.606 --rc geninfo_all_blocks=1 00:10:25.606 --rc geninfo_unexecuted_blocks=1 00:10:25.606 00:10:25.606 ' 00:10:25.606 09:01:02 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.606 09:01:02 -- nvmf/common.sh@7 -- # uname -s 00:10:25.606 09:01:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.606 09:01:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.606 09:01:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.606 09:01:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.606 09:01:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.606 09:01:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.606 09:01:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.606 09:01:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.606 09:01:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.606 09:01:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.606 09:01:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:25.606 
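The lt/cmp_versions walk traced above compares the installed lcov version against 2 field by field to decide whether the branch-coverage flags apply. A minimal sketch of the same idea (an illustration, not the repository's scripts/common.sh):

    # Hypothetical helper mirroring the dotted-version comparison shown above:
    # split both versions on dots, then compare field by field as integers.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0   # strictly smaller in this field: older
            (( x > y )) && return 1   # strictly larger: not older
        done
        return 1                       # equal throughout: not strictly less
    }
    # e.g. version_lt 1.15 2 && echo "lcov < 2, keep branch-coverage flags"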
09:01:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:25.606 09:01:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.606 09:01:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.606 09:01:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:25.606 09:01:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.606 09:01:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.606 09:01:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.606 09:01:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.606 09:01:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.606 09:01:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.606 09:01:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.606 09:01:02 -- paths/export.sh@5 -- # export PATH 00:10:25.607 09:01:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.607 09:01:02 -- nvmf/common.sh@46 -- # : 0 00:10:25.607 09:01:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:25.607 09:01:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:25.607 09:01:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:25.607 09:01:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.607 09:01:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.607 09:01:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:25.607 09:01:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:25.607 09:01:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:25.607 09:01:02 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:25.607 09:01:02 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.607 09:01:02 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:25.607 09:01:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:25.607 09:01:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.607 09:01:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:25.607 09:01:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:25.607 09:01:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:25.607 09:01:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.607 09:01:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.607 09:01:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.607 09:01:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:25.607 09:01:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:25.607 09:01:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:25.607 09:01:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:25.607 09:01:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:25.607 09:01:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:25.607 09:01:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.607 09:01:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.607 09:01:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:25.607 09:01:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:25.607 09:01:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:25.607 09:01:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:25.607 09:01:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:25.607 09:01:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.607 09:01:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:25.607 09:01:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:25.607 09:01:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:25.607 09:01:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:25.607 09:01:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:25.866 09:01:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:25.866 Cannot find device "nvmf_tgt_br" 00:10:25.866 09:01:02 -- nvmf/common.sh@154 -- # true 00:10:25.866 09:01:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.866 Cannot find device "nvmf_tgt_br2" 00:10:25.866 09:01:02 -- nvmf/common.sh@155 -- # true 00:10:25.866 09:01:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:25.866 09:01:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:25.866 Cannot find device "nvmf_tgt_br" 00:10:25.866 09:01:02 -- nvmf/common.sh@157 -- # true 00:10:25.866 09:01:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:25.866 Cannot find device "nvmf_tgt_br2" 00:10:25.866 09:01:02 -- nvmf/common.sh@158 -- # true 00:10:25.866 09:01:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:25.866 09:01:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:25.866 09:01:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:25.866 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:25.866 09:01:02 -- nvmf/common.sh@161 -- # true 00:10:25.866 09:01:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.866 09:01:02 -- nvmf/common.sh@162 -- # true 00:10:25.866 09:01:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:25.866 09:01:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:25.866 09:01:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:25.866 09:01:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:25.866 09:01:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:25.866 09:01:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:25.866 09:01:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:25.866 09:01:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:25.867 09:01:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:25.867 09:01:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:25.867 09:01:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:25.867 09:01:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:25.867 09:01:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:25.867 09:01:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.867 09:01:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.867 09:01:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.867 09:01:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:25.867 09:01:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:25.867 09:01:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.867 09:01:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.867 09:01:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:26.127 09:01:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:26.127 09:01:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:26.127 09:01:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:26.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:26.127 00:10:26.127 --- 10.0.0.2 ping statistics --- 00:10:26.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.127 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:26.127 09:01:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:26.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:26.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:26.127 00:10:26.127 --- 10.0.0.3 ping statistics --- 00:10:26.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.127 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:26.127 09:01:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:26.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:26.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:26.127 00:10:26.127 --- 10.0.0.1 ping statistics --- 00:10:26.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.127 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:26.127 09:01:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.127 09:01:02 -- nvmf/common.sh@421 -- # return 0 00:10:26.127 09:01:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:26.127 09:01:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.127 09:01:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:26.127 09:01:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:26.127 09:01:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.127 09:01:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:26.127 09:01:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:26.127 09:01:02 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:26.127 09:01:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:26.127 09:01:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.127 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:10:26.127 09:01:02 -- nvmf/common.sh@469 -- # nvmfpid=64182 00:10:26.127 09:01:02 -- nvmf/common.sh@470 -- # waitforlisten 64182 00:10:26.127 09:01:02 -- common/autotest_common.sh@829 -- # '[' -z 64182 ']' 00:10:26.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.127 09:01:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.127 09:01:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:26.127 09:01:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.127 09:01:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.127 09:01:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.127 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:10:26.127 [2024-11-17 09:01:02.910502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:26.127 [2024-11-17 09:01:02.910611] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.127 [2024-11-17 09:01:03.051759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.386 [2024-11-17 09:01:03.102096] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:26.386 [2024-11-17 09:01:03.102245] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.386 [2024-11-17 09:01:03.102258] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.386 [2024-11-17 09:01:03.102265] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
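Condensed from the nvmf_veth_init trace above, the harness builds this topology: an nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end (10.0.0.1) left in the root namespace, and the bridge-side peers enslaved to nvmf_br. A sketch of the same commands (not a copy of nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # first target port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # second target port (nvmf_tgt_if2 / 10.0.0.3) is added the same way; link-up steps omitted
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, as verified in the trace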
00:10:26.386 [2024-11-17 09:01:03.102421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:26.386 [2024-11-17 09:01:03.102888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:26.386 [2024-11-17 09:01:03.103100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:26.386 [2024-11-17 09:01:03.103369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.324 09:01:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.324 09:01:03 -- common/autotest_common.sh@862 -- # return 0 00:10:27.324 09:01:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:27.324 09:01:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.324 09:01:03 -- common/autotest_common.sh@10 -- # set +x 00:10:27.324 09:01:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.324 09:01:03 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.324 09:01:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.324 09:01:03 -- common/autotest_common.sh@10 -- # set +x 00:10:27.324 [2024-11-17 09:01:03.982390] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.324 09:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.324 09:01:04 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:27.324 09:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.324 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:27.324 Malloc0 00:10:27.324 09:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.324 09:01:04 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.324 09:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.324 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:27.324 09:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.324 09:01:04 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.324 09:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.324 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:27.324 09:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.324 09:01:04 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.324 09:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.324 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:27.324 [2024-11-17 09:01:04.056247] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.324 09:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.324 09:01:04 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:27.324 09:01:04 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:27.324 09:01:04 -- nvmf/common.sh@520 -- # config=() 00:10:27.324 09:01:04 -- nvmf/common.sh@520 -- # local subsystem config 00:10:27.324 09:01:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:27.324 09:01:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:27.324 { 00:10:27.324 "params": { 00:10:27.324 "name": "Nvme$subsystem", 00:10:27.324 "trtype": "$TEST_TRANSPORT", 00:10:27.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.324 "adrfam": "ipv4", 00:10:27.324 "trsvcid": "$NVMF_PORT", 00:10:27.324 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.324 "hdgst": ${hdgst:-false}, 00:10:27.324 "ddgst": ${ddgst:-false} 00:10:27.324 }, 00:10:27.324 "method": "bdev_nvme_attach_controller" 00:10:27.324 } 00:10:27.324 EOF 00:10:27.324 )") 00:10:27.324 09:01:04 -- nvmf/common.sh@542 -- # cat 00:10:27.324 09:01:04 -- nvmf/common.sh@544 -- # jq . 00:10:27.324 09:01:04 -- nvmf/common.sh@545 -- # IFS=, 00:10:27.324 09:01:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:27.324 "params": { 00:10:27.324 "name": "Nvme1", 00:10:27.324 "trtype": "tcp", 00:10:27.324 "traddr": "10.0.0.2", 00:10:27.324 "adrfam": "ipv4", 00:10:27.324 "trsvcid": "4420", 00:10:27.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.324 "hdgst": false, 00:10:27.324 "ddgst": false 00:10:27.324 }, 00:10:27.324 "method": "bdev_nvme_attach_controller" 00:10:27.324 }' 00:10:27.324 [2024-11-17 09:01:04.113723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:27.324 [2024-11-17 09:01:04.114000] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64218 ] 00:10:27.584 [2024-11-17 09:01:04.251905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:27.584 [2024-11-17 09:01:04.310704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.584 [2024-11-17 09:01:04.310849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.584 [2024-11-17 09:01:04.310854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.584 [2024-11-17 09:01:04.443227] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:27.584 [2024-11-17 09:01:04.443800] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:27.584 I/O targets: 00:10:27.584 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:27.584 00:10:27.584 00:10:27.584 CUnit - A unit testing framework for C - Version 2.1-3 00:10:27.584 http://cunit.sourceforge.net/ 00:10:27.584 00:10:27.584 00:10:27.584 Suite: bdevio tests on: Nvme1n1 00:10:27.584 Test: blockdev write read block ...passed 00:10:27.584 Test: blockdev write zeroes read block ...passed 00:10:27.584 Test: blockdev write zeroes read no split ...passed 00:10:27.584 Test: blockdev write zeroes read split ...passed 00:10:27.584 Test: blockdev write zeroes read split partial ...passed 00:10:27.584 Test: blockdev reset ...[2024-11-17 09:01:04.476770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:27.584 [2024-11-17 09:01:04.477024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x525c80 (9): Bad file descriptor 00:10:27.584 [2024-11-17 09:01:04.492655] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:27.584 passed 00:10:27.584 Test: blockdev write read 8 blocks ...passed 00:10:27.584 Test: blockdev write read size > 128k ...passed 00:10:27.584 Test: blockdev write read invalid size ...passed 00:10:27.584 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:27.584 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:27.584 Test: blockdev write read max offset ...passed 00:10:27.584 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:27.584 Test: blockdev writev readv 8 blocks ...passed 00:10:27.584 Test: blockdev writev readv 30 x 1block ...passed 00:10:27.584 Test: blockdev writev readv block ...passed 00:10:27.584 Test: blockdev writev readv size > 128k ...passed 00:10:27.584 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:27.584 Test: blockdev comparev and writev ...[2024-11-17 09:01:04.502736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.584 [2024-11-17 09:01:04.502975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.503146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.584 [2024-11-17 09:01:04.503281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.503789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.584 [2024-11-17 09:01:04.503955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.504105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.584 [2024-11-17 09:01:04.504247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.504758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.584 [2024-11-17 09:01:04.504872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.505003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.584 [2024-11-17 09:01:04.505133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.505612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.584 [2024-11-17 09:01:04.505721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.505849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.584 [2024-11-17 09:01:04.505995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:27.584 passed 00:10:27.584 Test: blockdev nvme passthru rw ...passed 00:10:27.584 Test: blockdev nvme passthru vendor specific ...[2024-11-17 09:01:04.506936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:27.584 [2024-11-17 09:01:04.507140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.507428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:27.584 [2024-11-17 09:01:04.507577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.507889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:27.584 [2024-11-17 09:01:04.508019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:27.584 [2024-11-17 09:01:04.508316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:27.584 [2024-11-17 09:01:04.508447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:27.584 passed 00:10:27.843 Test: blockdev nvme admin passthru ...passed 00:10:27.843 Test: blockdev copy ...passed 00:10:27.843 00:10:27.843 Run Summary: Type Total Ran Passed Failed Inactive 00:10:27.843 suites 1 1 n/a 0 0 00:10:27.843 tests 23 23 23 0 0 00:10:27.843 asserts 152 152 152 0 n/a 00:10:27.843 00:10:27.843 Elapsed time = 0.166 seconds 00:10:27.843 09:01:04 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.843 09:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.843 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:27.843 09:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.843 09:01:04 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:27.843 09:01:04 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:27.843 09:01:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:27.843 09:01:04 -- nvmf/common.sh@116 -- # sync 00:10:27.843 09:01:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:27.843 09:01:04 -- nvmf/common.sh@119 -- # set +e 00:10:27.843 09:01:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:27.843 09:01:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:27.843 rmmod nvme_tcp 00:10:27.843 rmmod nvme_fabrics 00:10:28.102 rmmod nvme_keyring 00:10:28.102 09:01:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:28.102 09:01:04 -- nvmf/common.sh@123 -- # set -e 00:10:28.102 09:01:04 -- nvmf/common.sh@124 -- # return 0 00:10:28.102 09:01:04 -- nvmf/common.sh@477 -- # '[' -n 64182 ']' 00:10:28.102 09:01:04 -- nvmf/common.sh@478 -- # killprocess 64182 00:10:28.102 09:01:04 -- common/autotest_common.sh@936 -- # '[' -z 64182 ']' 00:10:28.102 09:01:04 -- common/autotest_common.sh@940 -- # kill -0 64182 00:10:28.102 09:01:04 -- common/autotest_common.sh@941 -- # uname 00:10:28.102 09:01:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:28.102 09:01:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64182 00:10:28.102 09:01:04 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:10:28.103 09:01:04 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:28.103 killing process with pid 64182 00:10:28.103 09:01:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64182' 00:10:28.103 09:01:04 -- common/autotest_common.sh@955 -- # kill 64182 00:10:28.103 09:01:04 -- common/autotest_common.sh@960 -- # wait 64182 00:10:28.103 09:01:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:28.103 09:01:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:28.103 09:01:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:28.103 09:01:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.103 09:01:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:28.103 09:01:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.103 09:01:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.103 09:01:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.362 09:01:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:28.362 00:10:28.362 real 0m2.733s 00:10:28.362 user 0m8.864s 00:10:28.362 sys 0m0.653s 00:10:28.362 09:01:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:28.362 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.362 ************************************ 00:10:28.362 END TEST nvmf_bdevio 00:10:28.362 ************************************ 00:10:28.362 09:01:05 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:10:28.362 09:01:05 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:28.362 09:01:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:28.362 09:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:28.362 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.362 ************************************ 00:10:28.362 START TEST nvmf_bdevio_no_huge 00:10:28.362 ************************************ 00:10:28.362 09:01:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:28.362 * Looking for test storage... 
00:10:28.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:28.362 09:01:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:28.362 09:01:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:28.362 09:01:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:28.362 09:01:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:28.362 09:01:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:28.362 09:01:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:28.362 09:01:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:28.362 09:01:05 -- scripts/common.sh@335 -- # IFS=.-: 00:10:28.362 09:01:05 -- scripts/common.sh@335 -- # read -ra ver1 00:10:28.362 09:01:05 -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.362 09:01:05 -- scripts/common.sh@336 -- # read -ra ver2 00:10:28.362 09:01:05 -- scripts/common.sh@337 -- # local 'op=<' 00:10:28.362 09:01:05 -- scripts/common.sh@339 -- # ver1_l=2 00:10:28.362 09:01:05 -- scripts/common.sh@340 -- # ver2_l=1 00:10:28.362 09:01:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:28.362 09:01:05 -- scripts/common.sh@343 -- # case "$op" in 00:10:28.362 09:01:05 -- scripts/common.sh@344 -- # : 1 00:10:28.362 09:01:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:28.362 09:01:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.362 09:01:05 -- scripts/common.sh@364 -- # decimal 1 00:10:28.362 09:01:05 -- scripts/common.sh@352 -- # local d=1 00:10:28.362 09:01:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.362 09:01:05 -- scripts/common.sh@354 -- # echo 1 00:10:28.362 09:01:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:28.362 09:01:05 -- scripts/common.sh@365 -- # decimal 2 00:10:28.362 09:01:05 -- scripts/common.sh@352 -- # local d=2 00:10:28.362 09:01:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.362 09:01:05 -- scripts/common.sh@354 -- # echo 2 00:10:28.362 09:01:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:28.362 09:01:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:28.362 09:01:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:28.362 09:01:05 -- scripts/common.sh@367 -- # return 0 00:10:28.362 09:01:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.362 09:01:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:28.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.362 --rc genhtml_branch_coverage=1 00:10:28.362 --rc genhtml_function_coverage=1 00:10:28.362 --rc genhtml_legend=1 00:10:28.362 --rc geninfo_all_blocks=1 00:10:28.362 --rc geninfo_unexecuted_blocks=1 00:10:28.362 00:10:28.362 ' 00:10:28.362 09:01:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:28.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.362 --rc genhtml_branch_coverage=1 00:10:28.362 --rc genhtml_function_coverage=1 00:10:28.362 --rc genhtml_legend=1 00:10:28.362 --rc geninfo_all_blocks=1 00:10:28.362 --rc geninfo_unexecuted_blocks=1 00:10:28.362 00:10:28.362 ' 00:10:28.362 09:01:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:28.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.363 --rc genhtml_branch_coverage=1 00:10:28.363 --rc genhtml_function_coverage=1 00:10:28.363 --rc genhtml_legend=1 00:10:28.363 --rc geninfo_all_blocks=1 00:10:28.363 --rc geninfo_unexecuted_blocks=1 00:10:28.363 00:10:28.363 ' 00:10:28.363 
09:01:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:28.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.363 --rc genhtml_branch_coverage=1 00:10:28.363 --rc genhtml_function_coverage=1 00:10:28.363 --rc genhtml_legend=1 00:10:28.363 --rc geninfo_all_blocks=1 00:10:28.363 --rc geninfo_unexecuted_blocks=1 00:10:28.363 00:10:28.363 ' 00:10:28.363 09:01:05 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.363 09:01:05 -- nvmf/common.sh@7 -- # uname -s 00:10:28.363 09:01:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.363 09:01:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.363 09:01:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.363 09:01:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.363 09:01:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.363 09:01:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.363 09:01:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.363 09:01:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.363 09:01:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.363 09:01:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.622 09:01:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:28.622 09:01:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:28.622 09:01:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.622 09:01:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.622 09:01:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:28.622 09:01:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.622 09:01:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.622 09:01:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.622 09:01:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.622 09:01:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.622 09:01:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.622 09:01:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.622 09:01:05 -- paths/export.sh@5 -- # export PATH 00:10:28.622 09:01:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.622 09:01:05 -- nvmf/common.sh@46 -- # : 0 00:10:28.622 09:01:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:28.622 09:01:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:28.622 09:01:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:28.622 09:01:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.622 09:01:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.622 09:01:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:28.622 09:01:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:28.622 09:01:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:28.622 09:01:05 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.622 09:01:05 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.622 09:01:05 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:28.622 09:01:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:28.622 09:01:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.622 09:01:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:28.622 09:01:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:28.622 09:01:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:28.622 09:01:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.622 09:01:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.622 09:01:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.622 09:01:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:28.622 09:01:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:28.622 09:01:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:28.622 09:01:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:28.622 09:01:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:28.622 09:01:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:28.622 09:01:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.622 09:01:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.622 09:01:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:28.622 09:01:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:28.622 09:01:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:28.622 09:01:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:28.622 09:01:05 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:28.622 09:01:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.622 09:01:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:28.622 09:01:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:28.622 09:01:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:28.622 09:01:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:28.622 09:01:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:28.622 09:01:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:28.622 Cannot find device "nvmf_tgt_br" 00:10:28.622 09:01:05 -- nvmf/common.sh@154 -- # true 00:10:28.622 09:01:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.622 Cannot find device "nvmf_tgt_br2" 00:10:28.622 09:01:05 -- nvmf/common.sh@155 -- # true 00:10:28.622 09:01:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:28.622 09:01:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:28.622 Cannot find device "nvmf_tgt_br" 00:10:28.622 09:01:05 -- nvmf/common.sh@157 -- # true 00:10:28.622 09:01:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:28.622 Cannot find device "nvmf_tgt_br2" 00:10:28.622 09:01:05 -- nvmf/common.sh@158 -- # true 00:10:28.622 09:01:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:28.622 09:01:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:28.623 09:01:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.623 09:01:05 -- nvmf/common.sh@161 -- # true 00:10:28.623 09:01:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.623 09:01:05 -- nvmf/common.sh@162 -- # true 00:10:28.623 09:01:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:28.623 09:01:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.623 09:01:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.623 09:01:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.623 09:01:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.623 09:01:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.623 09:01:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.623 09:01:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:28.623 09:01:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:28.623 09:01:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:28.623 09:01:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:28.623 09:01:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:28.623 09:01:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:28.623 09:01:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.623 09:01:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.623 09:01:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:28.623 09:01:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:28.623 09:01:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:28.882 09:01:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.882 09:01:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.882 09:01:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.882 09:01:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.882 09:01:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.882 09:01:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:28.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:10:28.882 00:10:28.882 --- 10.0.0.2 ping statistics --- 00:10:28.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.882 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:28.882 09:01:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:28.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:28.882 00:10:28.882 --- 10.0.0.3 ping statistics --- 00:10:28.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.882 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:28.882 09:01:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:28.882 00:10:28.882 --- 10.0.0.1 ping statistics --- 00:10:28.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.882 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:28.882 09:01:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.882 09:01:05 -- nvmf/common.sh@421 -- # return 0 00:10:28.882 09:01:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:28.882 09:01:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.882 09:01:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:28.882 09:01:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:28.882 09:01:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.882 09:01:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:28.882 09:01:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:28.882 09:01:05 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:28.882 09:01:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:28.882 09:01:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.882 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.882 09:01:05 -- nvmf/common.sh@469 -- # nvmfpid=64404 00:10:28.882 09:01:05 -- nvmf/common.sh@470 -- # waitforlisten 64404 00:10:28.882 09:01:05 -- common/autotest_common.sh@829 -- # '[' -z 64404 ']' 00:10:28.882 09:01:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.882 09:01:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.882 09:01:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:28.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
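For the no-huge variant the target is launched inside the namespace with hugepages disabled and a 1024 MiB plain-memory pool, and the harness then waits for its RPC socket before issuing rpc_cmd calls. A simplified stand-in for that start-and-wait step (the real waitforlisten does more bookkeeping):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # Poll until the app answers on its RPC socket (simplified waitforlisten).
    for _ in {1..100}; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done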
00:10:28.882 09:01:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.882 09:01:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.882 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.882 [2024-11-17 09:01:05.701258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:28.882 [2024-11-17 09:01:05.701384] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:29.141 [2024-11-17 09:01:05.848255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.141 [2024-11-17 09:01:05.944694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:29.141 [2024-11-17 09:01:05.944885] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.141 [2024-11-17 09:01:05.944898] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.141 [2024-11-17 09:01:05.944906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.141 [2024-11-17 09:01:05.945300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:29.141 [2024-11-17 09:01:05.945422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:29.141 [2024-11-17 09:01:05.945575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:29.141 [2024-11-17 09:01:05.945580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.079 09:01:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.079 09:01:06 -- common/autotest_common.sh@862 -- # return 0 00:10:30.079 09:01:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:30.079 09:01:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:30.079 09:01:06 -- common/autotest_common.sh@10 -- # set +x 00:10:30.079 09:01:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.079 09:01:06 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.079 09:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.079 09:01:06 -- common/autotest_common.sh@10 -- # set +x 00:10:30.079 [2024-11-17 09:01:06.762364] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.079 09:01:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.079 09:01:06 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.079 09:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.079 09:01:06 -- common/autotest_common.sh@10 -- # set +x 00:10:30.079 Malloc0 00:10:30.079 09:01:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.079 09:01:06 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:30.079 09:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.079 09:01:06 -- common/autotest_common.sh@10 -- # set +x 00:10:30.079 09:01:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.079 09:01:06 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.079 09:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.079 
09:01:06 -- common/autotest_common.sh@10 -- # set +x 00:10:30.079 09:01:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.079 09:01:06 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.079 09:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.079 09:01:06 -- common/autotest_common.sh@10 -- # set +x 00:10:30.079 [2024-11-17 09:01:06.802584] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.079 09:01:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.079 09:01:06 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:30.079 09:01:06 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:30.079 09:01:06 -- nvmf/common.sh@520 -- # config=() 00:10:30.079 09:01:06 -- nvmf/common.sh@520 -- # local subsystem config 00:10:30.079 09:01:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:30.079 09:01:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:30.079 { 00:10:30.079 "params": { 00:10:30.079 "name": "Nvme$subsystem", 00:10:30.079 "trtype": "$TEST_TRANSPORT", 00:10:30.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.080 "adrfam": "ipv4", 00:10:30.080 "trsvcid": "$NVMF_PORT", 00:10:30.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.080 "hdgst": ${hdgst:-false}, 00:10:30.080 "ddgst": ${ddgst:-false} 00:10:30.080 }, 00:10:30.080 "method": "bdev_nvme_attach_controller" 00:10:30.080 } 00:10:30.080 EOF 00:10:30.080 )") 00:10:30.080 09:01:06 -- nvmf/common.sh@542 -- # cat 00:10:30.080 09:01:06 -- nvmf/common.sh@544 -- # jq . 00:10:30.080 09:01:06 -- nvmf/common.sh@545 -- # IFS=, 00:10:30.080 09:01:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:30.080 "params": { 00:10:30.080 "name": "Nvme1", 00:10:30.080 "trtype": "tcp", 00:10:30.080 "traddr": "10.0.0.2", 00:10:30.080 "adrfam": "ipv4", 00:10:30.080 "trsvcid": "4420", 00:10:30.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.080 "hdgst": false, 00:10:30.080 "ddgst": false 00:10:30.080 }, 00:10:30.080 "method": "bdev_nvme_attach_controller" 00:10:30.080 }' 00:10:30.080 [2024-11-17 09:01:06.870139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:30.080 [2024-11-17 09:01:06.870268] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64440 ] 00:10:30.339 [2024-11-17 09:01:07.035900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.339 [2024-11-17 09:01:07.169194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.339 [2024-11-17 09:01:07.169297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.339 [2024-11-17 09:01:07.169307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.599 [2024-11-17 09:01:07.336041] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
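By this point the target side has been provisioned with five RPCs (a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420), and the JSON printed above simply tells the bdevio app to attach that subsystem as bdev Nvme1 at start-up. A stand-alone sketch of the same wiring using the RPC tool, noting that bdevio itself consumes the inline JSON on /dev/fd/62 rather than issuing the last call over RPC:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side, over the default socket /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: what the generated JSON asks bdevio to do when it starts
$rpc bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1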
00:10:30.599 [2024-11-17 09:01:07.336092] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:30.599 I/O targets: 00:10:30.599 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:30.599 00:10:30.599 00:10:30.599 CUnit - A unit testing framework for C - Version 2.1-3 00:10:30.599 http://cunit.sourceforge.net/ 00:10:30.599 00:10:30.599 00:10:30.599 Suite: bdevio tests on: Nvme1n1 00:10:30.599 Test: blockdev write read block ...passed 00:10:30.599 Test: blockdev write zeroes read block ...passed 00:10:30.599 Test: blockdev write zeroes read no split ...passed 00:10:30.599 Test: blockdev write zeroes read split ...passed 00:10:30.599 Test: blockdev write zeroes read split partial ...passed 00:10:30.599 Test: blockdev reset ...[2024-11-17 09:01:07.373804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:30.599 [2024-11-17 09:01:07.373912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2138680 (9): Bad file descriptor 00:10:30.599 [2024-11-17 09:01:07.394325] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:30.599 passed 00:10:30.599 Test: blockdev write read 8 blocks ...passed 00:10:30.599 Test: blockdev write read size > 128k ...passed 00:10:30.599 Test: blockdev write read invalid size ...passed 00:10:30.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:30.599 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:30.599 Test: blockdev write read max offset ...passed 00:10:30.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:30.599 Test: blockdev writev readv 8 blocks ...passed 00:10:30.599 Test: blockdev writev readv 30 x 1block ...passed 00:10:30.599 Test: blockdev writev readv block ...passed 00:10:30.599 Test: blockdev writev readv size > 128k ...passed 00:10:30.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:30.599 Test: blockdev comparev and writev ...[2024-11-17 09:01:07.402444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:30.599 [2024-11-17 09:01:07.402497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.402522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:30.599 [2024-11-17 09:01:07.402535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.402861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:30.599 [2024-11-17 09:01:07.402896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.402918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:30.599 [2024-11-17 09:01:07.402930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.403373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:30.599 [2024-11-17 09:01:07.403405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.403426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:30.599 [2024-11-17 09:01:07.403438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.403826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:30.599 [2024-11-17 09:01:07.403861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.403883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:30.599 [2024-11-17 09:01:07.403896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:30.599 passed 00:10:30.599 Test: blockdev nvme passthru rw ...passed 00:10:30.599 Test: blockdev nvme passthru vendor specific ...[2024-11-17 09:01:07.404765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:30.599 [2024-11-17 09:01:07.404797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.404922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:30.599 [2024-11-17 09:01:07.404950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.405068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:30.599 [2024-11-17 09:01:07.405103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:30.599 [2024-11-17 09:01:07.405233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:30.599 [2024-11-17 09:01:07.405271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:30.599 passed 00:10:30.599 Test: blockdev nvme admin passthru ...passed 00:10:30.599 Test: blockdev copy ...passed 00:10:30.599 00:10:30.599 Run Summary: Type Total Ran Passed Failed Inactive 00:10:30.599 suites 1 1 n/a 0 0 00:10:30.599 tests 23 23 23 0 0 00:10:30.599 asserts 152 152 152 0 n/a 00:10:30.599 00:10:30.599 Elapsed time = 0.177 seconds 00:10:30.858 09:01:07 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.858 09:01:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.858 09:01:07 -- common/autotest_common.sh@10 -- # set +x 00:10:30.858 09:01:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.858 09:01:07 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:30.858 09:01:07 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:30.858 09:01:07 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:30.858 09:01:07 -- nvmf/common.sh@116 -- # sync 00:10:31.117 09:01:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:31.117 09:01:07 -- nvmf/common.sh@119 -- # set +e 00:10:31.117 09:01:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:31.117 09:01:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:31.117 rmmod nvme_tcp 00:10:31.117 rmmod nvme_fabrics 00:10:31.117 rmmod nvme_keyring 00:10:31.117 09:01:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:31.117 09:01:07 -- nvmf/common.sh@123 -- # set -e 00:10:31.117 09:01:07 -- nvmf/common.sh@124 -- # return 0 00:10:31.117 09:01:07 -- nvmf/common.sh@477 -- # '[' -n 64404 ']' 00:10:31.117 09:01:07 -- nvmf/common.sh@478 -- # killprocess 64404 00:10:31.117 09:01:07 -- common/autotest_common.sh@936 -- # '[' -z 64404 ']' 00:10:31.117 09:01:07 -- common/autotest_common.sh@940 -- # kill -0 64404 00:10:31.117 09:01:07 -- common/autotest_common.sh@941 -- # uname 00:10:31.117 09:01:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:31.117 09:01:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64404 00:10:31.117 09:01:07 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:31.117 09:01:07 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:31.117 killing process with pid 64404 00:10:31.117 09:01:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64404' 00:10:31.117 09:01:07 -- common/autotest_common.sh@955 -- # kill 64404 00:10:31.118 09:01:07 -- common/autotest_common.sh@960 -- # wait 64404 00:10:31.376 09:01:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:31.376 09:01:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:31.376 09:01:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:31.376 09:01:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:31.376 09:01:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:31.376 09:01:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.377 09:01:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.377 09:01:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.377 09:01:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:31.377 00:10:31.377 real 0m3.172s 00:10:31.377 user 0m10.441s 00:10:31.377 sys 0m1.155s 00:10:31.377 09:01:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:31.377 09:01:08 -- common/autotest_common.sh@10 -- # set +x 00:10:31.377 ************************************ 00:10:31.377 END TEST nvmf_bdevio_no_huge 00:10:31.377 ************************************ 00:10:31.637 09:01:08 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:31.637 09:01:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:31.637 09:01:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:31.637 09:01:08 -- common/autotest_common.sh@10 -- # set +x 00:10:31.637 ************************************ 00:10:31.637 START TEST nvmf_tls 00:10:31.637 ************************************ 00:10:31.637 09:01:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:31.637 * Looking for test storage... 
00:10:31.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:31.637 09:01:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:31.637 09:01:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:31.637 09:01:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:31.637 09:01:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:31.637 09:01:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:31.637 09:01:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:31.637 09:01:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:31.637 09:01:08 -- scripts/common.sh@335 -- # IFS=.-: 00:10:31.637 09:01:08 -- scripts/common.sh@335 -- # read -ra ver1 00:10:31.637 09:01:08 -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.637 09:01:08 -- scripts/common.sh@336 -- # read -ra ver2 00:10:31.637 09:01:08 -- scripts/common.sh@337 -- # local 'op=<' 00:10:31.637 09:01:08 -- scripts/common.sh@339 -- # ver1_l=2 00:10:31.637 09:01:08 -- scripts/common.sh@340 -- # ver2_l=1 00:10:31.637 09:01:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:31.637 09:01:08 -- scripts/common.sh@343 -- # case "$op" in 00:10:31.637 09:01:08 -- scripts/common.sh@344 -- # : 1 00:10:31.637 09:01:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:31.637 09:01:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:31.637 09:01:08 -- scripts/common.sh@364 -- # decimal 1 00:10:31.637 09:01:08 -- scripts/common.sh@352 -- # local d=1 00:10:31.637 09:01:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.637 09:01:08 -- scripts/common.sh@354 -- # echo 1 00:10:31.637 09:01:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:31.637 09:01:08 -- scripts/common.sh@365 -- # decimal 2 00:10:31.637 09:01:08 -- scripts/common.sh@352 -- # local d=2 00:10:31.637 09:01:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.637 09:01:08 -- scripts/common.sh@354 -- # echo 2 00:10:31.637 09:01:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:31.637 09:01:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:31.637 09:01:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:31.637 09:01:08 -- scripts/common.sh@367 -- # return 0 00:10:31.637 09:01:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.637 09:01:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:31.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.637 --rc genhtml_branch_coverage=1 00:10:31.637 --rc genhtml_function_coverage=1 00:10:31.637 --rc genhtml_legend=1 00:10:31.637 --rc geninfo_all_blocks=1 00:10:31.637 --rc geninfo_unexecuted_blocks=1 00:10:31.637 00:10:31.637 ' 00:10:31.637 09:01:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:31.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.637 --rc genhtml_branch_coverage=1 00:10:31.637 --rc genhtml_function_coverage=1 00:10:31.637 --rc genhtml_legend=1 00:10:31.637 --rc geninfo_all_blocks=1 00:10:31.637 --rc geninfo_unexecuted_blocks=1 00:10:31.637 00:10:31.637 ' 00:10:31.637 09:01:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:31.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.637 --rc genhtml_branch_coverage=1 00:10:31.637 --rc genhtml_function_coverage=1 00:10:31.637 --rc genhtml_legend=1 00:10:31.637 --rc geninfo_all_blocks=1 00:10:31.637 --rc geninfo_unexecuted_blocks=1 00:10:31.637 00:10:31.637 ' 00:10:31.637 
09:01:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:31.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.637 --rc genhtml_branch_coverage=1 00:10:31.637 --rc genhtml_function_coverage=1 00:10:31.637 --rc genhtml_legend=1 00:10:31.637 --rc geninfo_all_blocks=1 00:10:31.637 --rc geninfo_unexecuted_blocks=1 00:10:31.637 00:10:31.637 ' 00:10:31.637 09:01:08 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:31.637 09:01:08 -- nvmf/common.sh@7 -- # uname -s 00:10:31.637 09:01:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.637 09:01:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.637 09:01:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.637 09:01:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.637 09:01:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.637 09:01:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.637 09:01:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.637 09:01:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.637 09:01:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.637 09:01:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.637 09:01:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:31.637 09:01:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:10:31.637 09:01:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.637 09:01:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.637 09:01:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:31.637 09:01:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:31.637 09:01:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.637 09:01:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.637 09:01:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.638 09:01:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.638 09:01:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.638 09:01:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.638 09:01:08 -- paths/export.sh@5 -- # export PATH 00:10:31.638 09:01:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.638 09:01:08 -- nvmf/common.sh@46 -- # : 0 00:10:31.638 09:01:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:31.638 09:01:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:31.638 09:01:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:31.638 09:01:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.638 09:01:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.638 09:01:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:31.638 09:01:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:31.638 09:01:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:31.638 09:01:08 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:31.638 09:01:08 -- target/tls.sh@71 -- # nvmftestinit 00:10:31.638 09:01:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:31.638 09:01:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.638 09:01:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:31.638 09:01:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:31.638 09:01:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:31.638 09:01:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.638 09:01:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.638 09:01:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.638 09:01:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:31.638 09:01:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:31.638 09:01:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:31.638 09:01:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:31.638 09:01:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:31.638 09:01:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:31.638 09:01:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.638 09:01:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.638 09:01:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:31.638 09:01:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:31.638 09:01:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:31.638 09:01:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:31.638 09:01:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:31.638 
09:01:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.638 09:01:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:31.638 09:01:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:31.638 09:01:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:31.638 09:01:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:31.638 09:01:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:31.638 09:01:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:31.638 Cannot find device "nvmf_tgt_br" 00:10:31.638 09:01:08 -- nvmf/common.sh@154 -- # true 00:10:31.638 09:01:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:31.638 Cannot find device "nvmf_tgt_br2" 00:10:31.638 09:01:08 -- nvmf/common.sh@155 -- # true 00:10:31.638 09:01:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:31.638 09:01:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:31.897 Cannot find device "nvmf_tgt_br" 00:10:31.897 09:01:08 -- nvmf/common.sh@157 -- # true 00:10:31.897 09:01:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:31.897 Cannot find device "nvmf_tgt_br2" 00:10:31.897 09:01:08 -- nvmf/common.sh@158 -- # true 00:10:31.897 09:01:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:31.897 09:01:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:31.897 09:01:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:31.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.897 09:01:08 -- nvmf/common.sh@161 -- # true 00:10:31.897 09:01:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:31.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.897 09:01:08 -- nvmf/common.sh@162 -- # true 00:10:31.897 09:01:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:31.897 09:01:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:31.897 09:01:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:31.897 09:01:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:31.897 09:01:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:31.897 09:01:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:31.897 09:01:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:31.897 09:01:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:31.897 09:01:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:31.897 09:01:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:31.897 09:01:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:31.897 09:01:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:31.897 09:01:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:31.897 09:01:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:31.897 09:01:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:31.897 09:01:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:31.897 09:01:08 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:31.897 09:01:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:31.897 09:01:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.897 09:01:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:31.897 09:01:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.897 09:01:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.897 09:01:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.897 09:01:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:32.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:32.243 00:10:32.243 --- 10.0.0.2 ping statistics --- 00:10:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.243 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:32.243 09:01:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:32.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:32.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:32.243 00:10:32.243 --- 10.0.0.3 ping statistics --- 00:10:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.243 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:32.243 09:01:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:32.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:32.243 00:10:32.243 --- 10.0.0.1 ping statistics --- 00:10:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.243 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:32.243 09:01:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.243 09:01:08 -- nvmf/common.sh@421 -- # return 0 00:10:32.243 09:01:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:32.243 09:01:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.243 09:01:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:32.243 09:01:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:32.243 09:01:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.243 09:01:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:32.243 09:01:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:32.243 09:01:08 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:32.243 09:01:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:32.243 09:01:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.243 09:01:08 -- common/autotest_common.sh@10 -- # set +x 00:10:32.243 09:01:08 -- nvmf/common.sh@469 -- # nvmfpid=64628 00:10:32.243 09:01:08 -- nvmf/common.sh@470 -- # waitforlisten 64628 00:10:32.243 09:01:08 -- common/autotest_common.sh@829 -- # '[' -z 64628 ']' 00:10:32.243 09:01:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:32.243 09:01:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
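The three successful pings close out nvmf_veth_init: a bridge ties a veth pair for the initiator (10.0.0.1, kept in the root namespace) to two pairs whose far ends live in nvmf_tgt_ns_spdk (10.0.0.2 and 10.0.0.3), and an iptables rule opens the NVMe/TCP port on the initiator interface. Condensed to its essentials, with the second target interface (nvmf_tgt_if2, 10.0.0.3) following exactly the same pattern as the first:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root ns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end is pushed into the ns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # both *_br peer ends join the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT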
00:10:32.243 09:01:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.243 09:01:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.243 09:01:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.243 09:01:08 -- common/autotest_common.sh@10 -- # set +x 00:10:32.243 [2024-11-17 09:01:08.920464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.243 [2024-11-17 09:01:08.920569] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.243 [2024-11-17 09:01:09.059467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.243 [2024-11-17 09:01:09.116968] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:32.243 [2024-11-17 09:01:09.117358] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.243 [2024-11-17 09:01:09.117391] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.243 [2024-11-17 09:01:09.117401] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.243 [2024-11-17 09:01:09.117428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.201 09:01:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.201 09:01:09 -- common/autotest_common.sh@862 -- # return 0 00:10:33.201 09:01:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:33.201 09:01:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.201 09:01:09 -- common/autotest_common.sh@10 -- # set +x 00:10:33.201 09:01:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.201 09:01:09 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:10:33.201 09:01:09 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:33.460 true 00:10:33.460 09:01:10 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:33.460 09:01:10 -- target/tls.sh@82 -- # jq -r .tls_version 00:10:33.720 09:01:10 -- target/tls.sh@82 -- # version=0 00:10:33.720 09:01:10 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:10:33.720 09:01:10 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:33.979 09:01:10 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:33.979 09:01:10 -- target/tls.sh@90 -- # jq -r .tls_version 00:10:33.979 09:01:10 -- target/tls.sh@90 -- # version=13 00:10:33.979 09:01:10 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:10:33.979 09:01:10 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:34.238 09:01:11 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:34.238 09:01:11 -- target/tls.sh@98 -- # jq -r .tls_version 00:10:34.498 09:01:11 -- target/tls.sh@98 -- # version=7 00:10:34.498 09:01:11 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:10:34.498 09:01:11 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:34.498 09:01:11 
-- target/tls.sh@105 -- # jq -r .enable_ktls 00:10:34.756 09:01:11 -- target/tls.sh@105 -- # ktls=false 00:10:34.756 09:01:11 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:10:34.756 09:01:11 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:35.015 09:01:11 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:35.015 09:01:11 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:10:35.275 09:01:12 -- target/tls.sh@113 -- # ktls=true 00:10:35.275 09:01:12 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:10:35.275 09:01:12 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:35.534 09:01:12 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:10:35.534 09:01:12 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:35.793 09:01:12 -- target/tls.sh@121 -- # ktls=false 00:10:35.793 09:01:12 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:10:35.793 09:01:12 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:10:35.793 09:01:12 -- target/tls.sh@49 -- # local key hash crc 00:10:35.793 09:01:12 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:10:35.793 09:01:12 -- target/tls.sh@51 -- # hash=01 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # head -c 4 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # tail -c8 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # gzip -1 -c 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # crc='p$H�' 00:10:35.793 09:01:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:35.793 09:01:12 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:10:35.793 09:01:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:35.793 09:01:12 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:35.793 09:01:12 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:10:35.793 09:01:12 -- target/tls.sh@49 -- # local key hash crc 00:10:35.793 09:01:12 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:10:35.793 09:01:12 -- target/tls.sh@51 -- # hash=01 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # gzip -1 -c 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # head -c 4 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # tail -c8 00:10:35.793 09:01:12 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:10:35.793 09:01:12 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:35.793 09:01:12 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:10:35.793 09:01:12 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:35.793 09:01:12 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:35.793 09:01:12 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:35.793 09:01:12 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:35.793 09:01:12 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:35.793 09:01:12 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:35.793 09:01:12 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:35.793 09:01:12 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:35.793 09:01:12 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:36.362 09:01:12 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:36.621 09:01:13 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:36.621 09:01:13 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:36.621 09:01:13 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:36.880 [2024-11-17 09:01:13.590742] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.880 09:01:13 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:37.139 09:01:13 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:37.139 [2024-11-17 09:01:14.030879] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:37.139 [2024-11-17 09:01:14.031132] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.139 09:01:14 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:37.398 malloc0 00:10:37.398 09:01:14 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:37.656 09:01:14 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:37.915 09:01:14 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:50.122 Initializing NVMe Controllers 00:10:50.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:50.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:50.122 Initialization complete. Launching workers. 
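The two keys written to key1.txt and key2.txt above are in the NVMe TLS PSK interchange format: the prefix NVMeTLSkey-1:, the hash identifier (01 here, matching the script's hash=01), then a base64 blob holding the raw PSK bytes followed by their CRC-32, closed with ':'. The helper gets the checksum from gzip, whose 8-byte trailer is the little-endian CRC-32 of the input followed by the input length. The same derivation for the first key, as a sketch:

key=00112233445566778899aabbccddeeff
# last 8 bytes of a gzip stream = CRC-32 (LE) + input size (LE); keep only the CRC
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: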
00:10:50.122 ======================================================== 00:10:50.122 Latency(us) 00:10:50.122 Device Information : IOPS MiB/s Average min max 00:10:50.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11366.65 44.40 5631.64 1442.55 9252.94 00:10:50.122 ======================================================== 00:10:50.122 Total : 11366.65 44.40 5631.64 1442.55 9252.94 00:10:50.122 00:10:50.122 09:01:24 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:50.122 09:01:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:50.122 09:01:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:50.122 09:01:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:50.122 09:01:24 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:50.122 09:01:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:50.122 09:01:24 -- target/tls.sh@28 -- # bdevperf_pid=64871 00:10:50.122 09:01:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:50.122 09:01:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:50.122 09:01:24 -- target/tls.sh@31 -- # waitforlisten 64871 /var/tmp/bdevperf.sock 00:10:50.122 09:01:24 -- common/autotest_common.sh@829 -- # '[' -z 64871 ']' 00:10:50.122 09:01:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.122 09:01:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.122 09:01:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.122 09:01:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.122 09:01:24 -- common/autotest_common.sh@10 -- # set +x 00:10:50.122 [2024-11-17 09:01:24.953635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:50.122 [2024-11-17 09:01:24.954047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64871 ] 00:10:50.122 [2024-11-17 09:01:25.095296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.122 [2024-11-17 09:01:25.165096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.122 09:01:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.122 09:01:25 -- common/autotest_common.sh@862 -- # return 0 00:10:50.122 09:01:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:50.122 [2024-11-17 09:01:26.131980] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:50.122 TLSTESTn1 00:10:50.122 09:01:26 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:50.122 Running I/O for 10 seconds... 
00:11:00.122 00:11:00.122 Latency(us) 00:11:00.122 [2024-11-17T09:01:37.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.122 [2024-11-17T09:01:37.052Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:00.122 Verification LBA range: start 0x0 length 0x2000 00:11:00.122 TLSTESTn1 : 10.01 6367.86 24.87 0.00 0.00 20066.49 6136.55 24307.90 00:11:00.122 [2024-11-17T09:01:37.052Z] =================================================================================================================== 00:11:00.122 [2024-11-17T09:01:37.052Z] Total : 6367.86 24.87 0.00 0.00 20066.49 6136.55 24307.90 00:11:00.122 0 00:11:00.122 09:01:36 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:00.122 09:01:36 -- target/tls.sh@45 -- # killprocess 64871 00:11:00.122 09:01:36 -- common/autotest_common.sh@936 -- # '[' -z 64871 ']' 00:11:00.122 09:01:36 -- common/autotest_common.sh@940 -- # kill -0 64871 00:11:00.122 09:01:36 -- common/autotest_common.sh@941 -- # uname 00:11:00.122 09:01:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:00.122 09:01:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64871 00:11:00.122 killing process with pid 64871 00:11:00.122 Received shutdown signal, test time was about 10.000000 seconds 00:11:00.122 00:11:00.122 Latency(us) 00:11:00.122 [2024-11-17T09:01:37.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.122 [2024-11-17T09:01:37.052Z] =================================================================================================================== 00:11:00.122 [2024-11-17T09:01:37.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:00.122 09:01:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:00.122 09:01:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:00.122 09:01:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64871' 00:11:00.122 09:01:36 -- common/autotest_common.sh@955 -- # kill 64871 00:11:00.122 09:01:36 -- common/autotest_common.sh@960 -- # wait 64871 00:11:00.122 09:01:36 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:00.122 09:01:36 -- common/autotest_common.sh@650 -- # local es=0 00:11:00.122 09:01:36 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:00.122 09:01:36 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:00.122 09:01:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.122 09:01:36 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:00.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
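That 10-second run is the whole happy path for TLS in this suite: the listener is created with -k (which tcp.c flags as experimental above), the host NQN is bound to a PSK file with nvmf_subsystem_add_host --psk, and the initiator hands the same file to bdev_nvme_attach_controller. The three calls, pulled out of the trace for reference:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: TLS-enabled listener plus the PSK registered for host1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

# initiator side (bdevperf's RPC socket); succeeds only while key, host NQN and subsystem NQN all line up
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt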
00:11:00.122 09:01:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.122 09:01:36 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:00.122 09:01:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:00.122 09:01:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:00.122 09:01:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:00.122 09:01:36 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:11:00.122 09:01:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:00.122 09:01:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:00.122 09:01:36 -- target/tls.sh@28 -- # bdevperf_pid=65004 00:11:00.122 09:01:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:00.122 09:01:36 -- target/tls.sh@31 -- # waitforlisten 65004 /var/tmp/bdevperf.sock 00:11:00.122 09:01:36 -- common/autotest_common.sh@829 -- # '[' -z 65004 ']' 00:11:00.122 09:01:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:00.122 09:01:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.122 09:01:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:00.122 09:01:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.122 09:01:36 -- common/autotest_common.sh@10 -- # set +x 00:11:00.122 [2024-11-17 09:01:36.603297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:00.122 [2024-11-17 09:01:36.603636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65004 ] 00:11:00.122 [2024-11-17 09:01:36.734232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.122 [2024-11-17 09:01:36.785262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.690 09:01:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.690 09:01:37 -- common/autotest_common.sh@862 -- # return 0 00:11:00.690 09:01:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:00.949 [2024-11-17 09:01:37.816502] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:00.949 [2024-11-17 09:01:37.823194] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:00.950 [2024-11-17 09:01:37.824037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15de650 (107): Transport endpoint is not connected 00:11:00.950 [2024-11-17 09:01:37.825045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15de650 (9): Bad file descriptor 00:11:00.950 [2024-11-17 09:01:37.826042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:00.950 [2024-11-17 09:01:37.826065] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:00.950 [2024-11-17 09:01:37.826094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:00.950 request: 00:11:00.950 { 00:11:00.950 "name": "TLSTEST", 00:11:00.950 "trtype": "tcp", 00:11:00.950 "traddr": "10.0.0.2", 00:11:00.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.950 "adrfam": "ipv4", 00:11:00.950 "trsvcid": "4420", 00:11:00.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.950 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:11:00.950 "method": "bdev_nvme_attach_controller", 00:11:00.950 "req_id": 1 00:11:00.950 } 00:11:00.950 Got JSON-RPC error response 00:11:00.950 response: 00:11:00.950 { 00:11:00.950 "code": -32602, 00:11:00.950 "message": "Invalid parameters" 00:11:00.950 } 00:11:00.950 09:01:37 -- target/tls.sh@36 -- # killprocess 65004 00:11:00.950 09:01:37 -- common/autotest_common.sh@936 -- # '[' -z 65004 ']' 00:11:00.950 09:01:37 -- common/autotest_common.sh@940 -- # kill -0 65004 00:11:00.950 09:01:37 -- common/autotest_common.sh@941 -- # uname 00:11:00.950 09:01:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:00.950 09:01:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65004 00:11:01.209 killing process with pid 65004 00:11:01.209 Received shutdown signal, test time was about 10.000000 seconds 00:11:01.209 00:11:01.209 Latency(us) 00:11:01.209 [2024-11-17T09:01:38.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.209 [2024-11-17T09:01:38.139Z] =================================================================================================================== 00:11:01.209 [2024-11-17T09:01:38.139Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:01.209 09:01:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:01.209 09:01:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:01.209 09:01:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65004' 00:11:01.209 09:01:37 -- common/autotest_common.sh@955 -- # kill 65004 00:11:01.209 09:01:37 -- common/autotest_common.sh@960 -- # wait 65004 00:11:01.209 09:01:38 -- target/tls.sh@37 -- # return 1 00:11:01.209 09:01:38 -- common/autotest_common.sh@653 -- # es=1 00:11:01.209 09:01:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:01.209 09:01:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:01.209 09:01:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:01.209 09:01:38 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:01.209 09:01:38 -- common/autotest_common.sh@650 -- # local es=0 00:11:01.209 09:01:38 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:01.209 09:01:38 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:01.209 09:01:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.209 09:01:38 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:01.209 09:01:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.209 09:01:38 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:01.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
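The -32602 response above is the expected outcome of the first negative case: key2.txt does not match the PSK registered for host1, so the connection never comes up, the initiator sees errno 107 on the socket, and the attach RPC fails. The two cases that follow repeat the pattern with a host NQN (host2) and then a subsystem NQN (cnode2) for which the target has no PSK entry; the target looks keys up by the identity string NVMe0R01 <hostnqn> <subnqn>, as the tcp_sock_get_key error below shows. The failing attach differs from the working one only in the key path, roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# registered on the target: cnode1 + host1 + key1.txt; attempted here with the other key,
# so the handshake cannot complete and the RPC returns -32602
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt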
00:11:01.209 09:01:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:01.209 09:01:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:01.209 09:01:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:11:01.209 09:01:38 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:01.209 09:01:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:01.209 09:01:38 -- target/tls.sh@28 -- # bdevperf_pid=65032 00:11:01.209 09:01:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:01.209 09:01:38 -- target/tls.sh@31 -- # waitforlisten 65032 /var/tmp/bdevperf.sock 00:11:01.209 09:01:38 -- common/autotest_common.sh@829 -- # '[' -z 65032 ']' 00:11:01.209 09:01:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:01.209 09:01:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.209 09:01:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:01.209 09:01:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.209 09:01:38 -- common/autotest_common.sh@10 -- # set +x 00:11:01.209 09:01:38 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:01.209 [2024-11-17 09:01:38.102884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:01.209 [2024-11-17 09:01:38.102974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65032 ] 00:11:01.468 [2024-11-17 09:01:38.240554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.468 [2024-11-17 09:01:38.291374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.406 09:01:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.406 09:01:39 -- common/autotest_common.sh@862 -- # return 0 00:11:02.406 09:01:39 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:02.406 [2024-11-17 09:01:39.332647] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:02.665 [2024-11-17 09:01:39.343095] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:02.665 [2024-11-17 09:01:39.343319] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:02.665 [2024-11-17 09:01:39.343483] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:02.665 [2024-11-17 09:01:39.344468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59f650 (107): Transport endpoint is not connected 00:11:02.665 [2024-11-17 09:01:39.345461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59f650 (9): Bad file descriptor 
00:11:02.665 [2024-11-17 09:01:39.346457] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:02.665 [2024-11-17 09:01:39.346633] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:02.665 [2024-11-17 09:01:39.346729] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:11:02.665 request: 00:11:02.665 { 00:11:02.665 "name": "TLSTEST", 00:11:02.665 "trtype": "tcp", 00:11:02.665 "traddr": "10.0.0.2", 00:11:02.665 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:02.665 "adrfam": "ipv4", 00:11:02.665 "trsvcid": "4420", 00:11:02.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.665 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:02.665 "method": "bdev_nvme_attach_controller", 00:11:02.665 "req_id": 1 00:11:02.665 } 00:11:02.665 Got JSON-RPC error response 00:11:02.665 response: 00:11:02.665 { 00:11:02.665 "code": -32602, 00:11:02.665 "message": "Invalid parameters" 00:11:02.665 } 00:11:02.665 09:01:39 -- target/tls.sh@36 -- # killprocess 65032 00:11:02.665 09:01:39 -- common/autotest_common.sh@936 -- # '[' -z 65032 ']' 00:11:02.665 09:01:39 -- common/autotest_common.sh@940 -- # kill -0 65032 00:11:02.665 09:01:39 -- common/autotest_common.sh@941 -- # uname 00:11:02.665 09:01:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:02.665 09:01:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65032 00:11:02.665 killing process with pid 65032 00:11:02.665 Received shutdown signal, test time was about 10.000000 seconds 00:11:02.665 00:11:02.665 Latency(us) 00:11:02.665 [2024-11-17T09:01:39.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.665 [2024-11-17T09:01:39.595Z] =================================================================================================================== 00:11:02.665 [2024-11-17T09:01:39.595Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:02.665 09:01:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:02.665 09:01:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:02.665 09:01:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65032' 00:11:02.665 09:01:39 -- common/autotest_common.sh@955 -- # kill 65032 00:11:02.665 09:01:39 -- common/autotest_common.sh@960 -- # wait 65032 00:11:02.665 09:01:39 -- target/tls.sh@37 -- # return 1 00:11:02.665 09:01:39 -- common/autotest_common.sh@653 -- # es=1 00:11:02.665 09:01:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.665 09:01:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:02.665 09:01:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.665 09:01:39 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:02.665 09:01:39 -- common/autotest_common.sh@650 -- # local es=0 00:11:02.665 09:01:39 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:02.665 09:01:39 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:02.665 09:01:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.665 09:01:39 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:02.665 09:01:39 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:11:02.665 09:01:39 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:02.665 09:01:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:02.665 09:01:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:02.665 09:01:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:02.665 09:01:39 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:02.665 09:01:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:02.665 09:01:39 -- target/tls.sh@28 -- # bdevperf_pid=65058 00:11:02.665 09:01:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:02.666 09:01:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:02.666 09:01:39 -- target/tls.sh@31 -- # waitforlisten 65058 /var/tmp/bdevperf.sock 00:11:02.666 09:01:39 -- common/autotest_common.sh@829 -- # '[' -z 65058 ']' 00:11:02.666 09:01:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:02.666 09:01:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.666 09:01:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:02.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:02.666 09:01:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.666 09:01:39 -- common/autotest_common.sh@10 -- # set +x 00:11:02.925 [2024-11-17 09:01:39.624936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:02.925 [2024-11-17 09:01:39.625290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65058 ] 00:11:02.925 [2024-11-17 09:01:39.754737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.925 [2024-11-17 09:01:39.806865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.862 09:01:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.862 09:01:40 -- common/autotest_common.sh@862 -- # return 0 00:11:03.862 09:01:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:04.121 [2024-11-17 09:01:40.869067] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:04.121 [2024-11-17 09:01:40.874050] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:04.122 [2024-11-17 09:01:40.874266] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:04.122 [2024-11-17 09:01:40.874328] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:04.122 [2024-11-17 09:01:40.874772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c8650 (107): Transport endpoint is not connected 00:11:04.122 [2024-11-17 09:01:40.875746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c8650 (9): Bad file descriptor 00:11:04.122 [2024-11-17 09:01:40.876742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:04.122 [2024-11-17 09:01:40.876767] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:04.122 [2024-11-17 09:01:40.876777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:11:04.122 request: 00:11:04.122 { 00:11:04.122 "name": "TLSTEST", 00:11:04.122 "trtype": "tcp", 00:11:04.122 "traddr": "10.0.0.2", 00:11:04.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.122 "adrfam": "ipv4", 00:11:04.122 "trsvcid": "4420", 00:11:04.122 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:04.122 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:04.122 "method": "bdev_nvme_attach_controller", 00:11:04.122 "req_id": 1 00:11:04.122 } 00:11:04.122 Got JSON-RPC error response 00:11:04.122 response: 00:11:04.122 { 00:11:04.122 "code": -32602, 00:11:04.122 "message": "Invalid parameters" 00:11:04.122 } 00:11:04.122 09:01:40 -- target/tls.sh@36 -- # killprocess 65058 00:11:04.122 09:01:40 -- common/autotest_common.sh@936 -- # '[' -z 65058 ']' 00:11:04.122 09:01:40 -- common/autotest_common.sh@940 -- # kill -0 65058 00:11:04.122 09:01:40 -- common/autotest_common.sh@941 -- # uname 00:11:04.122 09:01:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.122 09:01:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65058 00:11:04.122 killing process with pid 65058 00:11:04.122 Received shutdown signal, test time was about 10.000000 seconds 00:11:04.122 00:11:04.122 Latency(us) 00:11:04.122 [2024-11-17T09:01:41.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.122 [2024-11-17T09:01:41.052Z] =================================================================================================================== 00:11:04.122 [2024-11-17T09:01:41.052Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:04.122 09:01:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:04.122 09:01:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:04.122 09:01:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65058' 00:11:04.122 09:01:40 -- common/autotest_common.sh@955 -- # kill 65058 00:11:04.122 09:01:40 -- common/autotest_common.sh@960 -- # wait 65058 00:11:04.381 09:01:41 -- target/tls.sh@37 -- # return 1 00:11:04.381 09:01:41 -- common/autotest_common.sh@653 -- # es=1 00:11:04.381 09:01:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:04.381 09:01:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:04.381 09:01:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:04.381 09:01:41 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:04.381 09:01:41 -- common/autotest_common.sh@650 -- # local es=0 00:11:04.381 09:01:41 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:04.381 09:01:41 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:04.381 09:01:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.381 09:01:41 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:04.381 09:01:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.381 09:01:41 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:04.381 09:01:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:04.381 09:01:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:04.381 09:01:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:04.381 09:01:41 -- target/tls.sh@23 -- # psk= 00:11:04.381 09:01:41 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:04.381 09:01:41 -- target/tls.sh@28 -- # bdevperf_pid=65087 00:11:04.381 09:01:41 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:04.381 09:01:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:04.381 09:01:41 -- target/tls.sh@31 -- # waitforlisten 65087 /var/tmp/bdevperf.sock 00:11:04.381 09:01:41 -- common/autotest_common.sh@829 -- # '[' -z 65087 ']' 00:11:04.381 09:01:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:04.381 09:01:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.381 09:01:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:04.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:04.382 09:01:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.382 09:01:41 -- common/autotest_common.sh@10 -- # set +x 00:11:04.382 [2024-11-17 09:01:41.144405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:04.382 [2024-11-17 09:01:41.144678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65087 ] 00:11:04.382 [2024-11-17 09:01:41.278235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.641 [2024-11-17 09:01:41.330236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.641 09:01:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.641 09:01:41 -- common/autotest_common.sh@862 -- # return 0 00:11:04.641 09:01:41 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:04.900 [2024-11-17 09:01:41.643487] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:04.900 [2024-11-17 09:01:41.645202] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a3010 (9): Bad file descriptor 00:11:04.900 [2024-11-17 09:01:41.646199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:04.900 [2024-11-17 09:01:41.646395] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:04.900 [2024-11-17 09:01:41.646493] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
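Note: this attempt differs from the previous ones only in that no --psk is passed at all, so against the TLS-enabled (-k) listener the connection is dropped straight away (errno 107 above) and the attach is again expected to fail under NOT. The traced RPC, minus the key:

    # Attach without any PSK against a TLS listener -- expected to fail.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1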
00:11:04.900 request: 00:11:04.900 { 00:11:04.900 "name": "TLSTEST", 00:11:04.900 "trtype": "tcp", 00:11:04.901 "traddr": "10.0.0.2", 00:11:04.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.901 "adrfam": "ipv4", 00:11:04.901 "trsvcid": "4420", 00:11:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.901 "method": "bdev_nvme_attach_controller", 00:11:04.901 "req_id": 1 00:11:04.901 } 00:11:04.901 Got JSON-RPC error response 00:11:04.901 response: 00:11:04.901 { 00:11:04.901 "code": -32602, 00:11:04.901 "message": "Invalid parameters" 00:11:04.901 } 00:11:04.901 09:01:41 -- target/tls.sh@36 -- # killprocess 65087 00:11:04.901 09:01:41 -- common/autotest_common.sh@936 -- # '[' -z 65087 ']' 00:11:04.901 09:01:41 -- common/autotest_common.sh@940 -- # kill -0 65087 00:11:04.901 09:01:41 -- common/autotest_common.sh@941 -- # uname 00:11:04.901 09:01:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.901 09:01:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65087 00:11:04.901 killing process with pid 65087 00:11:04.901 Received shutdown signal, test time was about 10.000000 seconds 00:11:04.901 00:11:04.901 Latency(us) 00:11:04.901 [2024-11-17T09:01:41.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.901 [2024-11-17T09:01:41.831Z] =================================================================================================================== 00:11:04.901 [2024-11-17T09:01:41.831Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:04.901 09:01:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:04.901 09:01:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:04.901 09:01:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65087' 00:11:04.901 09:01:41 -- common/autotest_common.sh@955 -- # kill 65087 00:11:04.901 09:01:41 -- common/autotest_common.sh@960 -- # wait 65087 00:11:05.159 09:01:41 -- target/tls.sh@37 -- # return 1 00:11:05.159 09:01:41 -- common/autotest_common.sh@653 -- # es=1 00:11:05.159 09:01:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:05.159 09:01:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:05.159 09:01:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:05.159 09:01:41 -- target/tls.sh@167 -- # killprocess 64628 00:11:05.159 09:01:41 -- common/autotest_common.sh@936 -- # '[' -z 64628 ']' 00:11:05.159 09:01:41 -- common/autotest_common.sh@940 -- # kill -0 64628 00:11:05.159 09:01:41 -- common/autotest_common.sh@941 -- # uname 00:11:05.159 09:01:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:05.159 09:01:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64628 00:11:05.159 killing process with pid 64628 00:11:05.159 09:01:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:05.159 09:01:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:05.159 09:01:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64628' 00:11:05.159 09:01:41 -- common/autotest_common.sh@955 -- # kill 64628 00:11:05.159 09:01:41 -- common/autotest_common.sh@960 -- # wait 64628 00:11:05.418 09:01:42 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:11:05.418 09:01:42 -- target/tls.sh@49 -- # local key hash crc 00:11:05.418 09:01:42 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:05.418 09:01:42 -- target/tls.sh@51 -- # hash=02 
00:11:05.418 09:01:42 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:11:05.418 09:01:42 -- target/tls.sh@52 -- # gzip -1 -c 00:11:05.418 09:01:42 -- target/tls.sh@52 -- # tail -c8 00:11:05.418 09:01:42 -- target/tls.sh@52 -- # head -c 4 00:11:05.418 09:01:42 -- target/tls.sh@52 -- # crc='�e�'\''' 00:11:05.418 09:01:42 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:05.418 09:01:42 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:11:05.418 09:01:42 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:05.418 09:01:42 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:05.418 09:01:42 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:05.418 09:01:42 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:05.418 09:01:42 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:05.418 09:01:42 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:11:05.418 09:01:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:05.418 09:01:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.418 09:01:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.418 09:01:42 -- nvmf/common.sh@469 -- # nvmfpid=65122 00:11:05.418 09:01:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:05.418 09:01:42 -- nvmf/common.sh@470 -- # waitforlisten 65122 00:11:05.418 09:01:42 -- common/autotest_common.sh@829 -- # '[' -z 65122 ']' 00:11:05.418 09:01:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.418 09:01:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.418 09:01:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.418 09:01:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.418 09:01:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.418 [2024-11-17 09:01:42.157988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:05.418 [2024-11-17 09:01:42.158223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.418 [2024-11-17 09:01:42.290010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.418 [2024-11-17 09:01:42.339004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:05.418 [2024-11-17 09:01:42.339163] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.418 [2024-11-17 09:01:42.339175] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.418 [2024-11-17 09:01:42.339183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
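Note: the format_interchange_psk steps traced above condense to the pipeline below. This is a sketch that mirrors the traced commands; the "NVMeTLSkey-1:02:" prefix and trailing colon are taken from the output above, and, as in the trace, the raw 4-byte CRC is held in a shell variable (which happens to work for this key; a CRC containing a NUL byte would need more careful handling):

    key=00112233445566778899aabbccddeeff0011223344556677
    # CRC32 of the key material, read from the gzip trailer (last 8 bytes = CRC32 + size).
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # Base64-encode key material + CRC and wrap it in the interchange framing.
    printf 'NVMeTLSkey-1:02:%s:\n' "$(echo -n "$key$crc" | base64)"
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: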
00:11:05.418 [2024-11-17 09:01:42.339210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.354 09:01:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.354 09:01:43 -- common/autotest_common.sh@862 -- # return 0 00:11:06.354 09:01:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:06.354 09:01:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.354 09:01:43 -- common/autotest_common.sh@10 -- # set +x 00:11:06.354 09:01:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.354 09:01:43 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:06.354 09:01:43 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:06.354 09:01:43 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:06.613 [2024-11-17 09:01:43.418947] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.613 09:01:43 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:06.873 09:01:43 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:07.132 [2024-11-17 09:01:43.907111] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:07.132 [2024-11-17 09:01:43.907368] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.132 09:01:43 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:07.392 malloc0 00:11:07.392 09:01:44 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:07.651 09:01:44 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:07.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
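Note: for reference, the target-side TLS setup traced above reduces to the following RPC sequence (paths and flags exactly as they appear in this trace; option spellings can differ between SPDK releases):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled ("TLS support is considered experimental").
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt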
00:11:07.651 09:01:44 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:07.651 09:01:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:07.651 09:01:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:07.651 09:01:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:07.651 09:01:44 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:07.651 09:01:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:07.651 09:01:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:07.651 09:01:44 -- target/tls.sh@28 -- # bdevperf_pid=65171 00:11:07.651 09:01:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:07.651 09:01:44 -- target/tls.sh@31 -- # waitforlisten 65171 /var/tmp/bdevperf.sock 00:11:07.651 09:01:44 -- common/autotest_common.sh@829 -- # '[' -z 65171 ']' 00:11:07.651 09:01:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:07.651 09:01:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.651 09:01:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:07.651 09:01:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.651 09:01:44 -- common/autotest_common.sh@10 -- # set +x 00:11:07.911 [2024-11-17 09:01:44.605175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:07.911 [2024-11-17 09:01:44.605421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65171 ] 00:11:07.911 [2024-11-17 09:01:44.741499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.911 [2024-11-17 09:01:44.810679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.848 09:01:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.848 09:01:45 -- common/autotest_common.sh@862 -- # return 0 00:11:08.848 09:01:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:08.848 [2024-11-17 09:01:45.741286] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:09.107 TLSTESTn1 00:11:09.107 09:01:45 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:09.107 Running I/O for 10 seconds... 
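Note: the successful initiator side, condensed from the trace above (same binaries and flags as traced; the final perform_tests call drives the 10-second verify workload whose results follow):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevperf with its own RPC socket, attach the controller over TLS, then run I/O.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk $SPDK/test/nvmf/target/key_long.txt
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests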
00:11:19.130 00:11:19.130 Latency(us) 00:11:19.130 [2024-11-17T09:01:56.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.130 [2024-11-17T09:01:56.060Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:19.130 Verification LBA range: start 0x0 length 0x2000 00:11:19.130 TLSTESTn1 : 10.01 6104.74 23.85 0.00 0.00 20935.39 4319.42 266910.25 00:11:19.130 [2024-11-17T09:01:56.060Z] =================================================================================================================== 00:11:19.130 [2024-11-17T09:01:56.060Z] Total : 6104.74 23.85 0.00 0.00 20935.39 4319.42 266910.25 00:11:19.130 0 00:11:19.130 09:01:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:19.130 09:01:55 -- target/tls.sh@45 -- # killprocess 65171 00:11:19.130 09:01:55 -- common/autotest_common.sh@936 -- # '[' -z 65171 ']' 00:11:19.130 09:01:55 -- common/autotest_common.sh@940 -- # kill -0 65171 00:11:19.130 09:01:55 -- common/autotest_common.sh@941 -- # uname 00:11:19.130 09:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:19.130 09:01:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65171 00:11:19.130 killing process with pid 65171 00:11:19.130 Received shutdown signal, test time was about 10.000000 seconds 00:11:19.130 00:11:19.130 Latency(us) 00:11:19.130 [2024-11-17T09:01:56.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.130 [2024-11-17T09:01:56.060Z] =================================================================================================================== 00:11:19.130 [2024-11-17T09:01:56.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:19.130 09:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:19.130 09:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:19.130 09:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65171' 00:11:19.130 09:01:55 -- common/autotest_common.sh@955 -- # kill 65171 00:11:19.130 09:01:55 -- common/autotest_common.sh@960 -- # wait 65171 00:11:19.390 09:01:56 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:19.390 09:01:56 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:19.390 09:01:56 -- common/autotest_common.sh@650 -- # local es=0 00:11:19.390 09:01:56 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:19.390 09:01:56 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:19.390 09:01:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:19.390 09:01:56 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:19.390 09:01:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:19.390 09:01:56 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:19.390 09:01:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:19.390 09:01:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:19.390 09:01:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:19.390 09:01:56 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:19.390 09:01:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:19.390 09:01:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:19.390 09:01:56 -- target/tls.sh@28 -- # bdevperf_pid=65305 00:11:19.390 09:01:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:19.390 09:01:56 -- target/tls.sh@31 -- # waitforlisten 65305 /var/tmp/bdevperf.sock 00:11:19.390 09:01:56 -- common/autotest_common.sh@829 -- # '[' -z 65305 ']' 00:11:19.390 09:01:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:19.390 09:01:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.390 09:01:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:19.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:19.390 09:01:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.390 09:01:56 -- common/autotest_common.sh@10 -- # set +x 00:11:19.390 [2024-11-17 09:01:56.223515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:19.390 [2024-11-17 09:01:56.223759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65305 ] 00:11:19.649 [2024-11-17 09:01:56.352461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.649 [2024-11-17 09:01:56.406077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.586 09:01:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.586 09:01:57 -- common/autotest_common.sh@862 -- # return 0 00:11:20.586 09:01:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:20.586 [2024-11-17 09:01:57.477913] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:20.586 [2024-11-17 09:01:57.478179] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:20.586 request: 00:11:20.586 { 00:11:20.586 "name": "TLSTEST", 00:11:20.586 "trtype": "tcp", 00:11:20.586 "traddr": "10.0.0.2", 00:11:20.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:20.586 "adrfam": "ipv4", 00:11:20.586 "trsvcid": "4420", 00:11:20.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:20.586 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:20.586 "method": "bdev_nvme_attach_controller", 00:11:20.586 "req_id": 1 00:11:20.586 } 00:11:20.586 Got JSON-RPC error response 00:11:20.586 response: 00:11:20.586 { 00:11:20.586 "code": -22, 00:11:20.586 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:20.586 } 00:11:20.586 09:01:57 -- target/tls.sh@36 -- # killprocess 65305 00:11:20.586 09:01:57 -- common/autotest_common.sh@936 -- # '[' -z 65305 ']' 00:11:20.586 09:01:57 -- common/autotest_common.sh@940 -- # kill -0 65305 00:11:20.586 09:01:57 -- common/autotest_common.sh@941 -- 
# uname 00:11:20.586 09:01:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.586 09:01:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65305 00:11:20.844 09:01:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:20.844 09:01:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:20.844 09:01:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65305' 00:11:20.844 killing process with pid 65305 00:11:20.844 09:01:57 -- common/autotest_common.sh@955 -- # kill 65305 00:11:20.844 Received shutdown signal, test time was about 10.000000 seconds 00:11:20.844 00:11:20.844 Latency(us) 00:11:20.844 [2024-11-17T09:01:57.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.845 [2024-11-17T09:01:57.775Z] =================================================================================================================== 00:11:20.845 [2024-11-17T09:01:57.775Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:20.845 09:01:57 -- common/autotest_common.sh@960 -- # wait 65305 00:11:20.845 09:01:57 -- target/tls.sh@37 -- # return 1 00:11:20.845 09:01:57 -- common/autotest_common.sh@653 -- # es=1 00:11:20.845 09:01:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:20.845 09:01:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:20.845 09:01:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:20.845 09:01:57 -- target/tls.sh@183 -- # killprocess 65122 00:11:20.845 09:01:57 -- common/autotest_common.sh@936 -- # '[' -z 65122 ']' 00:11:20.845 09:01:57 -- common/autotest_common.sh@940 -- # kill -0 65122 00:11:20.845 09:01:57 -- common/autotest_common.sh@941 -- # uname 00:11:20.845 09:01:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.845 09:01:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65122 00:11:20.845 killing process with pid 65122 00:11:20.845 09:01:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:20.845 09:01:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:20.845 09:01:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65122' 00:11:20.845 09:01:57 -- common/autotest_common.sh@955 -- # kill 65122 00:11:20.845 09:01:57 -- common/autotest_common.sh@960 -- # wait 65122 00:11:21.104 09:01:57 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:21.104 09:01:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:21.104 09:01:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.104 09:01:57 -- common/autotest_common.sh@10 -- # set +x 00:11:21.104 09:01:57 -- nvmf/common.sh@469 -- # nvmfpid=65338 00:11:21.104 09:01:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.104 09:01:57 -- nvmf/common.sh@470 -- # waitforlisten 65338 00:11:21.104 09:01:57 -- common/autotest_common.sh@829 -- # '[' -z 65338 ']' 00:11:21.104 09:01:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.104 09:01:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.104 09:01:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
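Note: the -22 "Could not retrieve PSK from file" above is the intended result of the earlier chmod 0666: tcp_load_psk (in bdev_nvme_rpc.c here, and in tcp.c on the target side further down) evidently rejects key files that are readable beyond the owner, since 0666 fails and the test otherwise keeps the key at 0600. The trace restores 0600 before the next successful run; for reference:

    # PSK files must be owner-only; 0666 triggers "Incorrect permissions for PSK file".
    chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    stat -c '%a %n' /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # expect: 600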
00:11:21.104 09:01:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.104 09:01:57 -- common/autotest_common.sh@10 -- # set +x 00:11:21.104 [2024-11-17 09:01:57.988573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:21.104 [2024-11-17 09:01:57.988881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.364 [2024-11-17 09:01:58.121209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.364 [2024-11-17 09:01:58.170626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:21.364 [2024-11-17 09:01:58.171017] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.364 [2024-11-17 09:01:58.171038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.364 [2024-11-17 09:01:58.171048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.364 [2024-11-17 09:01:58.171079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.301 09:01:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.301 09:01:58 -- common/autotest_common.sh@862 -- # return 0 00:11:22.301 09:01:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:22.301 09:01:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.301 09:01:58 -- common/autotest_common.sh@10 -- # set +x 00:11:22.301 09:01:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.301 09:01:58 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:22.301 09:01:58 -- common/autotest_common.sh@650 -- # local es=0 00:11:22.301 09:01:58 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:22.301 09:01:58 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:11:22.301 09:01:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.301 09:01:58 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:11:22.301 09:01:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.301 09:01:58 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:22.301 09:01:58 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:22.301 09:01:58 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:22.301 [2024-11-17 09:01:59.181442] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.301 09:01:59 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:22.560 09:01:59 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:22.818 [2024-11-17 09:01:59.617547] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:22.818 [2024-11-17 09:01:59.617831] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:11:22.818 09:01:59 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:23.077 malloc0 00:11:23.077 09:01:59 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:23.336 09:02:00 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:23.595 [2024-11-17 09:02:00.371898] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:23.596 [2024-11-17 09:02:00.371942] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:23.596 [2024-11-17 09:02:00.371975] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:23.596 request: 00:11:23.596 { 00:11:23.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:23.596 "host": "nqn.2016-06.io.spdk:host1", 00:11:23.596 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:23.596 "method": "nvmf_subsystem_add_host", 00:11:23.596 "req_id": 1 00:11:23.596 } 00:11:23.596 Got JSON-RPC error response 00:11:23.596 response: 00:11:23.596 { 00:11:23.596 "code": -32603, 00:11:23.596 "message": "Internal error" 00:11:23.596 } 00:11:23.596 09:02:00 -- common/autotest_common.sh@653 -- # es=1 00:11:23.596 09:02:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:23.596 09:02:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:23.596 09:02:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:23.596 09:02:00 -- target/tls.sh@189 -- # killprocess 65338 00:11:23.596 09:02:00 -- common/autotest_common.sh@936 -- # '[' -z 65338 ']' 00:11:23.596 09:02:00 -- common/autotest_common.sh@940 -- # kill -0 65338 00:11:23.596 09:02:00 -- common/autotest_common.sh@941 -- # uname 00:11:23.596 09:02:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:23.596 09:02:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65338 00:11:23.596 09:02:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:23.596 09:02:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:23.596 09:02:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65338' 00:11:23.596 killing process with pid 65338 00:11:23.596 09:02:00 -- common/autotest_common.sh@955 -- # kill 65338 00:11:23.596 09:02:00 -- common/autotest_common.sh@960 -- # wait 65338 00:11:23.855 09:02:00 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:23.855 09:02:00 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:23.855 09:02:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:23.855 09:02:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:23.855 09:02:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.855 09:02:00 -- nvmf/common.sh@469 -- # nvmfpid=65406 00:11:23.855 09:02:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:23.855 09:02:00 -- nvmf/common.sh@470 -- # waitforlisten 65406 00:11:23.855 09:02:00 -- common/autotest_common.sh@829 -- # '[' -z 65406 ']' 00:11:23.855 09:02:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.855 09:02:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.855 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.855 09:02:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.855 09:02:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.855 09:02:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.855 [2024-11-17 09:02:00.663276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:23.855 [2024-11-17 09:02:00.663581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.113 [2024-11-17 09:02:00.794716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.114 [2024-11-17 09:02:00.846838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:24.114 [2024-11-17 09:02:00.847209] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.114 [2024-11-17 09:02:00.847258] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.114 [2024-11-17 09:02:00.847369] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.114 [2024-11-17 09:02:00.847420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.681 09:02:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.681 09:02:01 -- common/autotest_common.sh@862 -- # return 0 00:11:24.681 09:02:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:24.681 09:02:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:24.681 09:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:24.940 09:02:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.940 09:02:01 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:24.940 09:02:01 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:24.940 09:02:01 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:25.199 [2024-11-17 09:02:01.882811] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.199 09:02:01 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:25.458 09:02:02 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:25.458 [2024-11-17 09:02:02.374919] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:25.458 [2024-11-17 09:02:02.375167] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.717 09:02:02 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:25.717 malloc0 00:11:25.976 09:02:02 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:25.976 09:02:02 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:26.235 09:02:03 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:26.235 09:02:03 -- target/tls.sh@197 -- # bdevperf_pid=65455 00:11:26.235 09:02:03 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:26.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:26.235 09:02:03 -- target/tls.sh@200 -- # waitforlisten 65455 /var/tmp/bdevperf.sock 00:11:26.235 09:02:03 -- common/autotest_common.sh@829 -- # '[' -z 65455 ']' 00:11:26.235 09:02:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:26.235 09:02:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.235 09:02:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:26.235 09:02:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.235 09:02:03 -- common/autotest_common.sh@10 -- # set +x 00:11:26.235 [2024-11-17 09:02:03.129010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:26.235 [2024-11-17 09:02:03.129262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65455 ] 00:11:26.493 [2024-11-17 09:02:03.264919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.493 [2024-11-17 09:02:03.334302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.428 09:02:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.428 09:02:04 -- common/autotest_common.sh@862 -- # return 0 00:11:27.428 09:02:04 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:27.428 [2024-11-17 09:02:04.304719] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:27.687 TLSTESTn1 00:11:27.687 09:02:04 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:27.946 09:02:04 -- target/tls.sh@205 -- # tgtconf='{ 00:11:27.946 "subsystems": [ 00:11:27.946 { 00:11:27.946 "subsystem": "iobuf", 00:11:27.946 "config": [ 00:11:27.946 { 00:11:27.946 "method": "iobuf_set_options", 00:11:27.946 "params": { 00:11:27.946 "small_pool_count": 8192, 00:11:27.946 "large_pool_count": 1024, 00:11:27.946 "small_bufsize": 8192, 00:11:27.946 "large_bufsize": 135168 00:11:27.946 } 00:11:27.946 } 00:11:27.946 ] 00:11:27.946 }, 00:11:27.946 { 00:11:27.946 "subsystem": "sock", 00:11:27.946 "config": [ 00:11:27.946 { 00:11:27.946 "method": "sock_impl_set_options", 00:11:27.946 "params": { 00:11:27.946 "impl_name": "uring", 00:11:27.946 "recv_buf_size": 2097152, 00:11:27.946 "send_buf_size": 2097152, 00:11:27.946 "enable_recv_pipe": true, 00:11:27.946 "enable_quickack": false, 00:11:27.946 "enable_placement_id": 0, 00:11:27.946 "enable_zerocopy_send_server": false, 00:11:27.946 "enable_zerocopy_send_client": false, 00:11:27.946 "zerocopy_threshold": 0, 00:11:27.946 "tls_version": 0, 00:11:27.946 "enable_ktls": false 00:11:27.946 
} 00:11:27.946 }, 00:11:27.946 { 00:11:27.946 "method": "sock_impl_set_options", 00:11:27.946 "params": { 00:11:27.946 "impl_name": "posix", 00:11:27.946 "recv_buf_size": 2097152, 00:11:27.946 "send_buf_size": 2097152, 00:11:27.946 "enable_recv_pipe": true, 00:11:27.946 "enable_quickack": false, 00:11:27.946 "enable_placement_id": 0, 00:11:27.946 "enable_zerocopy_send_server": true, 00:11:27.946 "enable_zerocopy_send_client": false, 00:11:27.946 "zerocopy_threshold": 0, 00:11:27.946 "tls_version": 0, 00:11:27.946 "enable_ktls": false 00:11:27.946 } 00:11:27.946 }, 00:11:27.946 { 00:11:27.946 "method": "sock_impl_set_options", 00:11:27.946 "params": { 00:11:27.946 "impl_name": "ssl", 00:11:27.946 "recv_buf_size": 4096, 00:11:27.946 "send_buf_size": 4096, 00:11:27.946 "enable_recv_pipe": true, 00:11:27.946 "enable_quickack": false, 00:11:27.946 "enable_placement_id": 0, 00:11:27.946 "enable_zerocopy_send_server": true, 00:11:27.946 "enable_zerocopy_send_client": false, 00:11:27.946 "zerocopy_threshold": 0, 00:11:27.946 "tls_version": 0, 00:11:27.946 "enable_ktls": false 00:11:27.946 } 00:11:27.946 } 00:11:27.946 ] 00:11:27.946 }, 00:11:27.946 { 00:11:27.946 "subsystem": "vmd", 00:11:27.946 "config": [] 00:11:27.946 }, 00:11:27.946 { 00:11:27.946 "subsystem": "accel", 00:11:27.946 "config": [ 00:11:27.946 { 00:11:27.946 "method": "accel_set_options", 00:11:27.946 "params": { 00:11:27.946 "small_cache_size": 128, 00:11:27.946 "large_cache_size": 16, 00:11:27.946 "task_count": 2048, 00:11:27.946 "sequence_count": 2048, 00:11:27.946 "buf_count": 2048 00:11:27.946 } 00:11:27.946 } 00:11:27.946 ] 00:11:27.946 }, 00:11:27.946 { 00:11:27.946 "subsystem": "bdev", 00:11:27.946 "config": [ 00:11:27.946 { 00:11:27.946 "method": "bdev_set_options", 00:11:27.946 "params": { 00:11:27.946 "bdev_io_pool_size": 65535, 00:11:27.946 "bdev_io_cache_size": 256, 00:11:27.946 "bdev_auto_examine": true, 00:11:27.946 "iobuf_small_cache_size": 128, 00:11:27.946 "iobuf_large_cache_size": 16 00:11:27.946 } 00:11:27.946 }, 00:11:27.946 { 00:11:27.946 "method": "bdev_raid_set_options", 00:11:27.946 "params": { 00:11:27.946 "process_window_size_kb": 1024 00:11:27.946 } 00:11:27.946 }, 00:11:27.946 { 00:11:27.946 "method": "bdev_iscsi_set_options", 00:11:27.946 "params": { 00:11:27.946 "timeout_sec": 30 00:11:27.946 } 00:11:27.946 }, 00:11:27.947 { 00:11:27.947 "method": "bdev_nvme_set_options", 00:11:27.947 "params": { 00:11:27.947 "action_on_timeout": "none", 00:11:27.947 "timeout_us": 0, 00:11:27.947 "timeout_admin_us": 0, 00:11:27.947 "keep_alive_timeout_ms": 10000, 00:11:27.947 "transport_retry_count": 4, 00:11:27.947 "arbitration_burst": 0, 00:11:27.947 "low_priority_weight": 0, 00:11:27.947 "medium_priority_weight": 0, 00:11:27.947 "high_priority_weight": 0, 00:11:27.947 "nvme_adminq_poll_period_us": 10000, 00:11:27.947 "nvme_ioq_poll_period_us": 0, 00:11:27.947 "io_queue_requests": 0, 00:11:27.947 "delay_cmd_submit": true, 00:11:27.947 "bdev_retry_count": 3, 00:11:27.947 "transport_ack_timeout": 0, 00:11:27.947 "ctrlr_loss_timeout_sec": 0, 00:11:27.947 "reconnect_delay_sec": 0, 00:11:27.947 "fast_io_fail_timeout_sec": 0, 00:11:27.947 "generate_uuids": false, 00:11:27.947 "transport_tos": 0, 00:11:27.947 "io_path_stat": false, 00:11:27.947 "allow_accel_sequence": false 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "bdev_nvme_set_hotplug", 00:11:27.947 "params": { 00:11:27.947 "period_us": 100000, 00:11:27.947 "enable": false 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": 
"bdev_malloc_create", 00:11:27.947 "params": { 00:11:27.947 "name": "malloc0", 00:11:27.947 "num_blocks": 8192, 00:11:27.947 "block_size": 4096, 00:11:27.947 "physical_block_size": 4096, 00:11:27.947 "uuid": "6c5f278e-cccf-449f-a6c6-e64a4dd279bd", 00:11:27.947 "optimal_io_boundary": 0 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "bdev_wait_for_examine" 00:11:27.947 } 00:11:27.947 ] 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "subsystem": "nbd", 00:11:27.947 "config": [] 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "subsystem": "scheduler", 00:11:27.947 "config": [ 00:11:27.947 { 00:11:27.947 "method": "framework_set_scheduler", 00:11:27.947 "params": { 00:11:27.947 "name": "static" 00:11:27.947 } 00:11:27.947 } 00:11:27.947 ] 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "subsystem": "nvmf", 00:11:27.947 "config": [ 00:11:27.947 { 00:11:27.947 "method": "nvmf_set_config", 00:11:27.947 "params": { 00:11:27.947 "discovery_filter": "match_any", 00:11:27.947 "admin_cmd_passthru": { 00:11:27.947 "identify_ctrlr": false 00:11:27.947 } 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "nvmf_set_max_subsystems", 00:11:27.947 "params": { 00:11:27.947 "max_subsystems": 1024 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "nvmf_set_crdt", 00:11:27.947 "params": { 00:11:27.947 "crdt1": 0, 00:11:27.947 "crdt2": 0, 00:11:27.947 "crdt3": 0 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "nvmf_create_transport", 00:11:27.947 "params": { 00:11:27.947 "trtype": "TCP", 00:11:27.947 "max_queue_depth": 128, 00:11:27.947 "max_io_qpairs_per_ctrlr": 127, 00:11:27.947 "in_capsule_data_size": 4096, 00:11:27.947 "max_io_size": 131072, 00:11:27.947 "io_unit_size": 131072, 00:11:27.947 "max_aq_depth": 128, 00:11:27.947 "num_shared_buffers": 511, 00:11:27.947 "buf_cache_size": 4294967295, 00:11:27.947 "dif_insert_or_strip": false, 00:11:27.947 "zcopy": false, 00:11:27.947 "c2h_success": false, 00:11:27.947 "sock_priority": 0, 00:11:27.947 "abort_timeout_sec": 1 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "nvmf_create_subsystem", 00:11:27.947 "params": { 00:11:27.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:27.947 "allow_any_host": false, 00:11:27.947 "serial_number": "SPDK00000000000001", 00:11:27.947 "model_number": "SPDK bdev Controller", 00:11:27.947 "max_namespaces": 10, 00:11:27.947 "min_cntlid": 1, 00:11:27.947 "max_cntlid": 65519, 00:11:27.947 "ana_reporting": false 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "nvmf_subsystem_add_host", 00:11:27.947 "params": { 00:11:27.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:27.947 "host": "nqn.2016-06.io.spdk:host1", 00:11:27.947 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "nvmf_subsystem_add_ns", 00:11:27.947 "params": { 00:11:27.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:27.947 "namespace": { 00:11:27.947 "nsid": 1, 00:11:27.947 "bdev_name": "malloc0", 00:11:27.947 "nguid": "6C5F278ECCCF449FA6C6E64A4DD279BD", 00:11:27.947 "uuid": "6c5f278e-cccf-449f-a6c6-e64a4dd279bd" 00:11:27.947 } 00:11:27.947 } 00:11:27.947 }, 00:11:27.947 { 00:11:27.947 "method": "nvmf_subsystem_add_listener", 00:11:27.947 "params": { 00:11:27.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:27.947 "listen_address": { 00:11:27.947 "trtype": "TCP", 00:11:27.947 "adrfam": "IPv4", 00:11:27.947 "traddr": "10.0.0.2", 00:11:27.947 "trsvcid": "4420" 00:11:27.947 }, 00:11:27.947 
"secure_channel": true 00:11:27.947 } 00:11:27.947 } 00:11:27.947 ] 00:11:27.947 } 00:11:27.947 ] 00:11:27.947 }' 00:11:27.947 09:02:04 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:28.207 09:02:05 -- target/tls.sh@206 -- # bdevperfconf='{ 00:11:28.207 "subsystems": [ 00:11:28.207 { 00:11:28.207 "subsystem": "iobuf", 00:11:28.207 "config": [ 00:11:28.207 { 00:11:28.207 "method": "iobuf_set_options", 00:11:28.207 "params": { 00:11:28.207 "small_pool_count": 8192, 00:11:28.207 "large_pool_count": 1024, 00:11:28.207 "small_bufsize": 8192, 00:11:28.207 "large_bufsize": 135168 00:11:28.207 } 00:11:28.207 } 00:11:28.207 ] 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "subsystem": "sock", 00:11:28.207 "config": [ 00:11:28.207 { 00:11:28.207 "method": "sock_impl_set_options", 00:11:28.207 "params": { 00:11:28.207 "impl_name": "uring", 00:11:28.207 "recv_buf_size": 2097152, 00:11:28.207 "send_buf_size": 2097152, 00:11:28.207 "enable_recv_pipe": true, 00:11:28.207 "enable_quickack": false, 00:11:28.207 "enable_placement_id": 0, 00:11:28.207 "enable_zerocopy_send_server": false, 00:11:28.207 "enable_zerocopy_send_client": false, 00:11:28.207 "zerocopy_threshold": 0, 00:11:28.207 "tls_version": 0, 00:11:28.207 "enable_ktls": false 00:11:28.207 } 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "method": "sock_impl_set_options", 00:11:28.207 "params": { 00:11:28.207 "impl_name": "posix", 00:11:28.207 "recv_buf_size": 2097152, 00:11:28.207 "send_buf_size": 2097152, 00:11:28.207 "enable_recv_pipe": true, 00:11:28.207 "enable_quickack": false, 00:11:28.207 "enable_placement_id": 0, 00:11:28.207 "enable_zerocopy_send_server": true, 00:11:28.207 "enable_zerocopy_send_client": false, 00:11:28.207 "zerocopy_threshold": 0, 00:11:28.207 "tls_version": 0, 00:11:28.207 "enable_ktls": false 00:11:28.207 } 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "method": "sock_impl_set_options", 00:11:28.207 "params": { 00:11:28.207 "impl_name": "ssl", 00:11:28.207 "recv_buf_size": 4096, 00:11:28.207 "send_buf_size": 4096, 00:11:28.207 "enable_recv_pipe": true, 00:11:28.207 "enable_quickack": false, 00:11:28.207 "enable_placement_id": 0, 00:11:28.207 "enable_zerocopy_send_server": true, 00:11:28.207 "enable_zerocopy_send_client": false, 00:11:28.207 "zerocopy_threshold": 0, 00:11:28.207 "tls_version": 0, 00:11:28.207 "enable_ktls": false 00:11:28.207 } 00:11:28.207 } 00:11:28.207 ] 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "subsystem": "vmd", 00:11:28.207 "config": [] 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "subsystem": "accel", 00:11:28.207 "config": [ 00:11:28.207 { 00:11:28.207 "method": "accel_set_options", 00:11:28.207 "params": { 00:11:28.207 "small_cache_size": 128, 00:11:28.207 "large_cache_size": 16, 00:11:28.207 "task_count": 2048, 00:11:28.207 "sequence_count": 2048, 00:11:28.207 "buf_count": 2048 00:11:28.207 } 00:11:28.207 } 00:11:28.207 ] 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "subsystem": "bdev", 00:11:28.207 "config": [ 00:11:28.207 { 00:11:28.207 "method": "bdev_set_options", 00:11:28.207 "params": { 00:11:28.207 "bdev_io_pool_size": 65535, 00:11:28.207 "bdev_io_cache_size": 256, 00:11:28.207 "bdev_auto_examine": true, 00:11:28.207 "iobuf_small_cache_size": 128, 00:11:28.207 "iobuf_large_cache_size": 16 00:11:28.207 } 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "method": "bdev_raid_set_options", 00:11:28.207 "params": { 00:11:28.207 "process_window_size_kb": 1024 00:11:28.207 } 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "method": 
"bdev_iscsi_set_options", 00:11:28.207 "params": { 00:11:28.207 "timeout_sec": 30 00:11:28.207 } 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "method": "bdev_nvme_set_options", 00:11:28.207 "params": { 00:11:28.207 "action_on_timeout": "none", 00:11:28.207 "timeout_us": 0, 00:11:28.207 "timeout_admin_us": 0, 00:11:28.207 "keep_alive_timeout_ms": 10000, 00:11:28.207 "transport_retry_count": 4, 00:11:28.207 "arbitration_burst": 0, 00:11:28.207 "low_priority_weight": 0, 00:11:28.207 "medium_priority_weight": 0, 00:11:28.207 "high_priority_weight": 0, 00:11:28.207 "nvme_adminq_poll_period_us": 10000, 00:11:28.207 "nvme_ioq_poll_period_us": 0, 00:11:28.207 "io_queue_requests": 512, 00:11:28.207 "delay_cmd_submit": true, 00:11:28.207 "bdev_retry_count": 3, 00:11:28.207 "transport_ack_timeout": 0, 00:11:28.207 "ctrlr_loss_timeout_sec": 0, 00:11:28.207 "reconnect_delay_sec": 0, 00:11:28.207 "fast_io_fail_timeout_sec": 0, 00:11:28.207 "generate_uuids": false, 00:11:28.207 "transport_tos": 0, 00:11:28.207 "io_path_stat": false, 00:11:28.207 "allow_accel_sequence": false 00:11:28.207 } 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "method": "bdev_nvme_attach_controller", 00:11:28.207 "params": { 00:11:28.207 "name": "TLSTEST", 00:11:28.207 "trtype": "TCP", 00:11:28.207 "adrfam": "IPv4", 00:11:28.207 "traddr": "10.0.0.2", 00:11:28.207 "trsvcid": "4420", 00:11:28.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.207 "prchk_reftag": false, 00:11:28.207 "prchk_guard": false, 00:11:28.207 "ctrlr_loss_timeout_sec": 0, 00:11:28.207 "reconnect_delay_sec": 0, 00:11:28.207 "fast_io_fail_timeout_sec": 0, 00:11:28.207 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:28.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.207 "hdgst": false, 00:11:28.207 "ddgst": false 00:11:28.207 } 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "method": "bdev_nvme_set_hotplug", 00:11:28.207 "params": { 00:11:28.207 "period_us": 100000, 00:11:28.207 "enable": false 00:11:28.207 } 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "method": "bdev_wait_for_examine" 00:11:28.207 } 00:11:28.207 ] 00:11:28.207 }, 00:11:28.207 { 00:11:28.207 "subsystem": "nbd", 00:11:28.207 "config": [] 00:11:28.207 } 00:11:28.207 ] 00:11:28.207 }' 00:11:28.207 09:02:05 -- target/tls.sh@208 -- # killprocess 65455 00:11:28.207 09:02:05 -- common/autotest_common.sh@936 -- # '[' -z 65455 ']' 00:11:28.207 09:02:05 -- common/autotest_common.sh@940 -- # kill -0 65455 00:11:28.207 09:02:05 -- common/autotest_common.sh@941 -- # uname 00:11:28.207 09:02:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.207 09:02:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65455 00:11:28.207 killing process with pid 65455 00:11:28.207 Received shutdown signal, test time was about 10.000000 seconds 00:11:28.207 00:11:28.207 Latency(us) 00:11:28.207 [2024-11-17T09:02:05.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.207 [2024-11-17T09:02:05.138Z] =================================================================================================================== 00:11:28.208 [2024-11-17T09:02:05.138Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:28.208 09:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:28.208 09:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:28.208 09:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65455' 00:11:28.208 09:02:05 -- common/autotest_common.sh@955 -- # 
kill 65455 00:11:28.208 09:02:05 -- common/autotest_common.sh@960 -- # wait 65455 00:11:28.468 09:02:05 -- target/tls.sh@209 -- # killprocess 65406 00:11:28.468 09:02:05 -- common/autotest_common.sh@936 -- # '[' -z 65406 ']' 00:11:28.468 09:02:05 -- common/autotest_common.sh@940 -- # kill -0 65406 00:11:28.468 09:02:05 -- common/autotest_common.sh@941 -- # uname 00:11:28.468 09:02:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.468 09:02:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65406 00:11:28.468 killing process with pid 65406 00:11:28.468 09:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:28.468 09:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:28.468 09:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65406' 00:11:28.468 09:02:05 -- common/autotest_common.sh@955 -- # kill 65406 00:11:28.468 09:02:05 -- common/autotest_common.sh@960 -- # wait 65406 00:11:28.735 09:02:05 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:11:28.735 09:02:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:28.735 09:02:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.735 09:02:05 -- common/autotest_common.sh@10 -- # set +x 00:11:28.735 09:02:05 -- target/tls.sh@212 -- # echo '{ 00:11:28.735 "subsystems": [ 00:11:28.735 { 00:11:28.735 "subsystem": "iobuf", 00:11:28.735 "config": [ 00:11:28.735 { 00:11:28.735 "method": "iobuf_set_options", 00:11:28.735 "params": { 00:11:28.735 "small_pool_count": 8192, 00:11:28.735 "large_pool_count": 1024, 00:11:28.735 "small_bufsize": 8192, 00:11:28.735 "large_bufsize": 135168 00:11:28.735 } 00:11:28.735 } 00:11:28.735 ] 00:11:28.735 }, 00:11:28.735 { 00:11:28.735 "subsystem": "sock", 00:11:28.735 "config": [ 00:11:28.735 { 00:11:28.735 "method": "sock_impl_set_options", 00:11:28.735 "params": { 00:11:28.735 "impl_name": "uring", 00:11:28.735 "recv_buf_size": 2097152, 00:11:28.735 "send_buf_size": 2097152, 00:11:28.735 "enable_recv_pipe": true, 00:11:28.735 "enable_quickack": false, 00:11:28.735 "enable_placement_id": 0, 00:11:28.735 "enable_zerocopy_send_server": false, 00:11:28.735 "enable_zerocopy_send_client": false, 00:11:28.735 "zerocopy_threshold": 0, 00:11:28.735 "tls_version": 0, 00:11:28.735 "enable_ktls": false 00:11:28.735 } 00:11:28.735 }, 00:11:28.735 { 00:11:28.735 "method": "sock_impl_set_options", 00:11:28.735 "params": { 00:11:28.735 "impl_name": "posix", 00:11:28.735 "recv_buf_size": 2097152, 00:11:28.735 "send_buf_size": 2097152, 00:11:28.735 "enable_recv_pipe": true, 00:11:28.735 "enable_quickack": false, 00:11:28.735 "enable_placement_id": 0, 00:11:28.735 "enable_zerocopy_send_server": true, 00:11:28.735 "enable_zerocopy_send_client": false, 00:11:28.735 "zerocopy_threshold": 0, 00:11:28.735 "tls_version": 0, 00:11:28.735 "enable_ktls": false 00:11:28.735 } 00:11:28.735 }, 00:11:28.735 { 00:11:28.735 "method": "sock_impl_set_options", 00:11:28.735 "params": { 00:11:28.735 "impl_name": "ssl", 00:11:28.736 "recv_buf_size": 4096, 00:11:28.736 "send_buf_size": 4096, 00:11:28.736 "enable_recv_pipe": true, 00:11:28.736 "enable_quickack": false, 00:11:28.736 "enable_placement_id": 0, 00:11:28.736 "enable_zerocopy_send_server": true, 00:11:28.736 "enable_zerocopy_send_client": false, 00:11:28.736 "zerocopy_threshold": 0, 00:11:28.736 "tls_version": 0, 00:11:28.736 "enable_ktls": false 00:11:28.736 } 00:11:28.736 } 00:11:28.736 ] 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "subsystem": "vmd", 
00:11:28.736 "config": [] 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "subsystem": "accel", 00:11:28.736 "config": [ 00:11:28.736 { 00:11:28.736 "method": "accel_set_options", 00:11:28.736 "params": { 00:11:28.736 "small_cache_size": 128, 00:11:28.736 "large_cache_size": 16, 00:11:28.736 "task_count": 2048, 00:11:28.736 "sequence_count": 2048, 00:11:28.736 "buf_count": 2048 00:11:28.736 } 00:11:28.736 } 00:11:28.736 ] 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "subsystem": "bdev", 00:11:28.736 "config": [ 00:11:28.736 { 00:11:28.736 "method": "bdev_set_options", 00:11:28.736 "params": { 00:11:28.736 "bdev_io_pool_size": 65535, 00:11:28.736 "bdev_io_cache_size": 256, 00:11:28.736 "bdev_auto_examine": true, 00:11:28.736 "iobuf_small_cache_size": 128, 00:11:28.736 "iobuf_large_cache_size": 16 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "bdev_raid_set_options", 00:11:28.736 "params": { 00:11:28.736 "process_window_size_kb": 1024 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "bdev_iscsi_set_options", 00:11:28.736 "params": { 00:11:28.736 "timeout_sec": 30 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "bdev_nvme_set_options", 00:11:28.736 "params": { 00:11:28.736 "action_on_timeout": "none", 00:11:28.736 "timeout_us": 0, 00:11:28.736 "timeout_admin_us": 0, 00:11:28.736 "keep_alive_timeout_ms": 10000, 00:11:28.736 "transport_retry_count": 4, 00:11:28.736 "arbitration_burst": 0, 00:11:28.736 "low_priority_weight": 0, 00:11:28.736 "medium_priority_weight": 0, 00:11:28.736 "high_priority_weight": 0, 00:11:28.736 "nvme_adminq_poll_period_us": 10000, 00:11:28.736 "nvme_ioq_poll_period_us": 0, 00:11:28.736 "io_queue_requests": 0, 00:11:28.736 "delay_cmd_submit": true, 00:11:28.736 "bdev_retry_count": 3, 00:11:28.736 "transport_ack_timeout": 0, 00:11:28.736 "ctrlr_loss_timeout_sec": 0, 00:11:28.736 "reconnect_delay_sec": 0, 00:11:28.736 "fast_io_fail_timeout_sec": 0, 00:11:28.736 "generate_uuids": false, 00:11:28.736 "transport_tos": 0, 00:11:28.736 "io_path_stat": false, 00:11:28.736 "allow_accel_sequence": false 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "bdev_nvme_set_hotplug", 00:11:28.736 "params": { 00:11:28.736 "period_us": 100000, 00:11:28.736 "enable": false 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "bdev_malloc_create", 00:11:28.736 "params": { 00:11:28.736 "name": "malloc0", 00:11:28.736 "num_blocks": 8192, 00:11:28.736 "block_size": 4096, 00:11:28.736 "physical_block_size": 4096, 00:11:28.736 "uuid": "6c5f278e-cccf-449f-a6c6-e64a4dd279bd", 00:11:28.736 "optimal_io_boundary": 0 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "bdev_wait_for_examine" 00:11:28.736 } 00:11:28.736 ] 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "subsystem": "nbd", 00:11:28.736 "config": [] 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "subsystem": "scheduler", 00:11:28.736 "config": [ 00:11:28.736 { 00:11:28.736 "method": "framework_set_scheduler", 00:11:28.736 "params": { 00:11:28.736 "name": "static" 00:11:28.736 } 00:11:28.736 } 00:11:28.736 ] 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "subsystem": "nvmf", 00:11:28.736 "config": [ 00:11:28.736 { 00:11:28.736 "method": "nvmf_set_config", 00:11:28.736 "params": { 00:11:28.736 "discovery_filter": "match_any", 00:11:28.736 "admin_cmd_passthru": { 00:11:28.736 "identify_ctrlr": false 00:11:28.736 } 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "nvmf_set_max_subsystems", 00:11:28.736 "params": { 
00:11:28.736 "max_subsystems": 1024 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "nvmf_set_crdt", 00:11:28.736 "params": { 00:11:28.736 "crdt1": 0, 00:11:28.736 "crdt2": 0, 00:11:28.736 "crdt3": 0 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "nvmf_create_transport", 00:11:28.736 "params": { 00:11:28.736 "trtype": "TCP", 00:11:28.736 "max_queue_depth": 128, 00:11:28.736 "max_io_qpairs_per_ctrlr": 127, 00:11:28.736 "in_capsule_data_size": 4096, 00:11:28.736 "max_io_size": 131072, 00:11:28.736 "io_unit_size": 131072, 00:11:28.736 "max_aq_depth": 128, 00:11:28.736 "num_shared_buffers": 511, 00:11:28.736 "buf_cache_size": 4294967295, 00:11:28.736 "dif_insert_or_strip": false, 00:11:28.736 "zcopy": false, 00:11:28.736 "c2h_success": false, 00:11:28.736 "sock_priority": 0, 00:11:28.736 "abort_timeout_sec": 1 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "nvmf_create_subsystem", 00:11:28.736 "params": { 00:11:28.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.736 "allow_any_host": false, 00:11:28.736 "serial_number": "SPDK00000000000001", 00:11:28.736 "model_number": "SPDK bdev Controller", 00:11:28.736 "max_namespaces": 10, 00:11:28.736 "min_cntlid": 1, 00:11:28.736 "max_cntlid": 65519, 00:11:28.736 "ana_reporting": false 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "nvmf_subsystem_add_host", 00:11:28.736 "params": { 00:11:28.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.736 "host": "nqn.2016-06.io.spdk:host1", 00:11:28.736 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:28.736 } 00:11:28.736 }, 00:11:28.736 { 00:11:28.736 "method": "nvmf_subsystem_add_ns", 00:11:28.736 "params": { 00:11:28.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.737 "namespace": { 00:11:28.737 "nsid": 1, 00:11:28.737 "bdev_name": "malloc0", 00:11:28.737 "nguid": "6C5F278ECCCF449FA6C6E64A4DD279BD", 00:11:28.737 "uuid": "6c5f278e-cccf-449f-a6c6-e64a4dd279bd" 00:11:28.737 } 00:11:28.737 } 00:11:28.737 }, 00:11:28.737 { 00:11:28.737 "method": "nvmf_subsystem_add_listener", 00:11:28.737 "params": { 00:11:28.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.737 "listen_address": { 00:11:28.737 "trtype": "TCP", 00:11:28.737 "adrfam": "IPv4", 00:11:28.737 "traddr": "10.0.0.2", 00:11:28.737 "trsvcid": "4420" 00:11:28.737 }, 00:11:28.737 "secure_channel": true 00:11:28.737 } 00:11:28.737 } 00:11:28.737 ] 00:11:28.737 } 00:11:28.737 ] 00:11:28.737 }' 00:11:28.737 09:02:05 -- nvmf/common.sh@469 -- # nvmfpid=65498 00:11:28.737 09:02:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:28.737 09:02:05 -- nvmf/common.sh@470 -- # waitforlisten 65498 00:11:28.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.737 09:02:05 -- common/autotest_common.sh@829 -- # '[' -z 65498 ']' 00:11:28.737 09:02:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.737 09:02:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.737 09:02:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.737 09:02:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.737 09:02:05 -- common/autotest_common.sh@10 -- # set +x 00:11:28.737 [2024-11-17 09:02:05.500796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:28.737 [2024-11-17 09:02:05.501082] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.737 [2024-11-17 09:02:05.640090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.004 [2024-11-17 09:02:05.692585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:29.004 [2024-11-17 09:02:05.693010] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.004 [2024-11-17 09:02:05.693147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.004 [2024-11-17 09:02:05.693264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.004 [2024-11-17 09:02:05.693324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.004 [2024-11-17 09:02:05.869408] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.004 [2024-11-17 09:02:05.901368] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:29.004 [2024-11-17 09:02:05.901592] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.571 09:02:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.571 09:02:06 -- common/autotest_common.sh@862 -- # return 0 00:11:29.571 09:02:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:29.571 09:02:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.571 09:02:06 -- common/autotest_common.sh@10 -- # set +x 00:11:29.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:29.830 09:02:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.830 09:02:06 -- target/tls.sh@216 -- # bdevperf_pid=65530 00:11:29.830 09:02:06 -- target/tls.sh@217 -- # waitforlisten 65530 /var/tmp/bdevperf.sock 00:11:29.830 09:02:06 -- common/autotest_common.sh@829 -- # '[' -z 65530 ']' 00:11:29.831 09:02:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:29.831 09:02:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.831 09:02:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:11:29.831 09:02:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.831 09:02:06 -- common/autotest_common.sh@10 -- # set +x 00:11:29.831 09:02:06 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:29.831 09:02:06 -- target/tls.sh@213 -- # echo '{ 00:11:29.831 "subsystems": [ 00:11:29.831 { 00:11:29.831 "subsystem": "iobuf", 00:11:29.831 "config": [ 00:11:29.831 { 00:11:29.831 "method": "iobuf_set_options", 00:11:29.831 "params": { 00:11:29.831 "small_pool_count": 8192, 00:11:29.831 "large_pool_count": 1024, 00:11:29.831 "small_bufsize": 8192, 00:11:29.831 "large_bufsize": 135168 00:11:29.831 } 00:11:29.831 } 00:11:29.831 ] 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "subsystem": "sock", 00:11:29.831 "config": [ 00:11:29.831 { 00:11:29.831 "method": "sock_impl_set_options", 00:11:29.831 "params": { 00:11:29.831 "impl_name": "uring", 00:11:29.831 "recv_buf_size": 2097152, 00:11:29.831 "send_buf_size": 2097152, 00:11:29.831 "enable_recv_pipe": true, 00:11:29.831 "enable_quickack": false, 00:11:29.831 "enable_placement_id": 0, 00:11:29.831 "enable_zerocopy_send_server": false, 00:11:29.831 "enable_zerocopy_send_client": false, 00:11:29.831 "zerocopy_threshold": 0, 00:11:29.831 "tls_version": 0, 00:11:29.831 "enable_ktls": false 00:11:29.831 } 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "method": "sock_impl_set_options", 00:11:29.831 "params": { 00:11:29.831 "impl_name": "posix", 00:11:29.831 "recv_buf_size": 2097152, 00:11:29.831 "send_buf_size": 2097152, 00:11:29.831 "enable_recv_pipe": true, 00:11:29.831 "enable_quickack": false, 00:11:29.831 "enable_placement_id": 0, 00:11:29.831 "enable_zerocopy_send_server": true, 00:11:29.831 "enable_zerocopy_send_client": false, 00:11:29.831 "zerocopy_threshold": 0, 00:11:29.831 "tls_version": 0, 00:11:29.831 "enable_ktls": false 00:11:29.831 } 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "method": "sock_impl_set_options", 00:11:29.831 "params": { 00:11:29.831 "impl_name": "ssl", 00:11:29.831 "recv_buf_size": 4096, 00:11:29.831 "send_buf_size": 4096, 00:11:29.831 "enable_recv_pipe": true, 00:11:29.831 "enable_quickack": false, 00:11:29.831 "enable_placement_id": 0, 00:11:29.831 "enable_zerocopy_send_server": true, 00:11:29.831 "enable_zerocopy_send_client": false, 00:11:29.831 "zerocopy_threshold": 0, 00:11:29.831 "tls_version": 0, 00:11:29.831 "enable_ktls": false 00:11:29.831 } 00:11:29.831 } 00:11:29.831 ] 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "subsystem": "vmd", 00:11:29.831 "config": [] 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "subsystem": "accel", 00:11:29.831 "config": [ 00:11:29.831 { 00:11:29.831 "method": "accel_set_options", 00:11:29.831 "params": { 00:11:29.831 "small_cache_size": 128, 00:11:29.831 "large_cache_size": 16, 00:11:29.831 "task_count": 2048, 00:11:29.831 "sequence_count": 2048, 00:11:29.831 "buf_count": 2048 00:11:29.831 } 00:11:29.831 } 00:11:29.831 ] 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "subsystem": "bdev", 00:11:29.831 "config": [ 00:11:29.831 { 00:11:29.831 "method": "bdev_set_options", 00:11:29.831 "params": { 00:11:29.831 "bdev_io_pool_size": 65535, 00:11:29.831 "bdev_io_cache_size": 256, 00:11:29.831 "bdev_auto_examine": true, 00:11:29.831 "iobuf_small_cache_size": 128, 00:11:29.831 "iobuf_large_cache_size": 16 00:11:29.831 } 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "method": "bdev_raid_set_options", 00:11:29.831 "params": { 00:11:29.831 "process_window_size_kb": 1024 
00:11:29.831 } 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "method": "bdev_iscsi_set_options", 00:11:29.831 "params": { 00:11:29.831 "timeout_sec": 30 00:11:29.831 } 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "method": "bdev_nvme_set_options", 00:11:29.831 "params": { 00:11:29.831 "action_on_timeout": "none", 00:11:29.831 "timeout_us": 0, 00:11:29.831 "timeout_admin_us": 0, 00:11:29.831 "keep_alive_timeout_ms": 10000, 00:11:29.831 "transport_retry_count": 4, 00:11:29.831 "arbitration_burst": 0, 00:11:29.831 "low_priority_weight": 0, 00:11:29.831 "medium_priority_weight": 0, 00:11:29.831 "high_priority_weight": 0, 00:11:29.831 "nvme_adminq_poll_period_us": 10000, 00:11:29.831 "nvme_ioq_poll_period_us": 0, 00:11:29.831 "io_queue_requests": 512, 00:11:29.831 "delay_cmd_submit": true, 00:11:29.831 "bdev_retry_count": 3, 00:11:29.831 "transport_ack_timeout": 0, 00:11:29.831 "ctrlr_loss_timeout_sec": 0, 00:11:29.831 "reconnect_delay_sec": 0, 00:11:29.831 "fast_io_fail_timeout_sec": 0, 00:11:29.831 "generate_uuids": false, 00:11:29.831 "transport_tos": 0, 00:11:29.831 "io_path_stat": false, 00:11:29.831 "allow_accel_sequence": false 00:11:29.831 } 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "method": "bdev_nvme_attach_controller", 00:11:29.831 "params": { 00:11:29.831 "name": "TLSTEST", 00:11:29.831 "trtype": "TCP", 00:11:29.831 "adrfam": "IPv4", 00:11:29.831 "traddr": "10.0.0.2", 00:11:29.831 "trsvcid": "4420", 00:11:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.831 "prchk_reftag": false, 00:11:29.831 "prchk_guard": false, 00:11:29.831 "ctrlr_loss_timeout_sec": 0, 00:11:29.831 "reconnect_delay_sec": 0, 00:11:29.831 "fast_io_fail_timeout_sec": 0, 00:11:29.831 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:29.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.831 "hdgst": false, 00:11:29.831 "ddgst": false 00:11:29.831 } 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "method": "bdev_nvme_set_hotplug", 00:11:29.831 "params": { 00:11:29.831 "period_us": 100000, 00:11:29.831 "enable": false 00:11:29.831 } 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "method": "bdev_wait_for_examine" 00:11:29.831 } 00:11:29.831 ] 00:11:29.831 }, 00:11:29.831 { 00:11:29.831 "subsystem": "nbd", 00:11:29.831 "config": [] 00:11:29.831 } 00:11:29.831 ] 00:11:29.831 }' 00:11:29.831 [2024-11-17 09:02:06.575234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:29.831 [2024-11-17 09:02:06.575335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65530 ] 00:11:29.831 [2024-11-17 09:02:06.713157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.090 [2024-11-17 09:02:06.780462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.090 [2024-11-17 09:02:06.909361] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:31.026 09:02:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.026 09:02:07 -- common/autotest_common.sh@862 -- # return 0 00:11:31.026 09:02:07 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:31.026 Running I/O for 10 seconds... 
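A compact sketch of the step just traced: once bdevperf exposes its RPC socket, the verify workload configured at start-up is kicked off and runs for the ten seconds reported below. The socket-wait loop is illustrative; the perform_tests invocation is exactly the one shown above.

    # Wait for bdevperf's RPC socket, then trigger the pre-configured verify job.
    while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
        -s /var/tmp/bdevperf.sock perform_tests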
00:11:41.013 00:11:41.013 Latency(us) 00:11:41.013 [2024-11-17T09:02:17.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.013 [2024-11-17T09:02:17.943Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:41.013 Verification LBA range: start 0x0 length 0x2000 00:11:41.013 TLSTESTn1 : 10.02 6251.35 24.42 0.00 0.00 20438.31 5749.29 25499.46 00:11:41.013 [2024-11-17T09:02:17.943Z] =================================================================================================================== 00:11:41.013 [2024-11-17T09:02:17.943Z] Total : 6251.35 24.42 0.00 0.00 20438.31 5749.29 25499.46 00:11:41.013 0 00:11:41.013 09:02:17 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.013 09:02:17 -- target/tls.sh@223 -- # killprocess 65530 00:11:41.013 09:02:17 -- common/autotest_common.sh@936 -- # '[' -z 65530 ']' 00:11:41.013 09:02:17 -- common/autotest_common.sh@940 -- # kill -0 65530 00:11:41.013 09:02:17 -- common/autotest_common.sh@941 -- # uname 00:11:41.013 09:02:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.013 09:02:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65530 00:11:41.013 09:02:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:41.013 09:02:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:41.013 killing process with pid 65530 00:11:41.013 Received shutdown signal, test time was about 10.000000 seconds 00:11:41.013 00:11:41.013 Latency(us) 00:11:41.013 [2024-11-17T09:02:17.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.014 [2024-11-17T09:02:17.944Z] =================================================================================================================== 00:11:41.014 [2024-11-17T09:02:17.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:41.014 09:02:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65530' 00:11:41.014 09:02:17 -- common/autotest_common.sh@955 -- # kill 65530 00:11:41.014 09:02:17 -- common/autotest_common.sh@960 -- # wait 65530 00:11:41.273 09:02:17 -- target/tls.sh@224 -- # killprocess 65498 00:11:41.273 09:02:17 -- common/autotest_common.sh@936 -- # '[' -z 65498 ']' 00:11:41.273 09:02:17 -- common/autotest_common.sh@940 -- # kill -0 65498 00:11:41.273 09:02:17 -- common/autotest_common.sh@941 -- # uname 00:11:41.273 09:02:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.273 09:02:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65498 00:11:41.273 killing process with pid 65498 00:11:41.273 09:02:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:41.273 09:02:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:41.273 09:02:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65498' 00:11:41.273 09:02:18 -- common/autotest_common.sh@955 -- # kill 65498 00:11:41.273 09:02:18 -- common/autotest_common.sh@960 -- # wait 65498 00:11:41.273 09:02:18 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:11:41.273 09:02:18 -- target/tls.sh@227 -- # cleanup 00:11:41.273 09:02:18 -- target/tls.sh@15 -- # process_shm --id 0 00:11:41.273 09:02:18 -- common/autotest_common.sh@806 -- # type=--id 00:11:41.273 09:02:18 -- common/autotest_common.sh@807 -- # id=0 00:11:41.273 09:02:18 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:41.273 09:02:18 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:11:41.273 09:02:18 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:41.273 09:02:18 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:41.273 09:02:18 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:41.273 09:02:18 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:41.273 nvmf_trace.0 00:11:41.532 09:02:18 -- common/autotest_common.sh@821 -- # return 0 00:11:41.532 09:02:18 -- target/tls.sh@16 -- # killprocess 65530 00:11:41.532 09:02:18 -- common/autotest_common.sh@936 -- # '[' -z 65530 ']' 00:11:41.532 09:02:18 -- common/autotest_common.sh@940 -- # kill -0 65530 00:11:41.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65530) - No such process 00:11:41.532 Process with pid 65530 is not found 00:11:41.532 09:02:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65530 is not found' 00:11:41.532 09:02:18 -- target/tls.sh@17 -- # nvmftestfini 00:11:41.532 09:02:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:41.532 09:02:18 -- nvmf/common.sh@116 -- # sync 00:11:41.532 09:02:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:41.532 09:02:18 -- nvmf/common.sh@119 -- # set +e 00:11:41.532 09:02:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:41.532 09:02:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:41.532 rmmod nvme_tcp 00:11:41.532 rmmod nvme_fabrics 00:11:41.532 rmmod nvme_keyring 00:11:41.532 09:02:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:41.532 09:02:18 -- nvmf/common.sh@123 -- # set -e 00:11:41.532 09:02:18 -- nvmf/common.sh@124 -- # return 0 00:11:41.532 09:02:18 -- nvmf/common.sh@477 -- # '[' -n 65498 ']' 00:11:41.532 09:02:18 -- nvmf/common.sh@478 -- # killprocess 65498 00:11:41.532 09:02:18 -- common/autotest_common.sh@936 -- # '[' -z 65498 ']' 00:11:41.532 09:02:18 -- common/autotest_common.sh@940 -- # kill -0 65498 00:11:41.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65498) - No such process 00:11:41.532 Process with pid 65498 is not found 00:11:41.532 09:02:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65498 is not found' 00:11:41.532 09:02:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:41.532 09:02:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:41.532 09:02:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:41.532 09:02:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:41.532 09:02:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:41.532 09:02:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.532 09:02:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.532 09:02:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.532 09:02:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:41.532 09:02:18 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:41.532 ************************************ 00:11:41.532 END TEST nvmf_tls 00:11:41.532 ************************************ 00:11:41.532 00:11:41.532 real 1m10.057s 00:11:41.532 user 1m49.196s 00:11:41.532 sys 0m23.396s 00:11:41.532 09:02:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.532 09:02:18 -- common/autotest_common.sh@10 -- # 
set +x 00:11:41.532 09:02:18 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:41.532 09:02:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:41.532 09:02:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.532 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:11:41.532 ************************************ 00:11:41.532 START TEST nvmf_fips 00:11:41.532 ************************************ 00:11:41.532 09:02:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:41.792 * Looking for test storage... 00:11:41.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:11:41.792 09:02:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:41.792 09:02:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:41.792 09:02:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:41.792 09:02:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:41.792 09:02:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:41.792 09:02:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:41.792 09:02:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:41.792 09:02:18 -- scripts/common.sh@335 -- # IFS=.-: 00:11:41.792 09:02:18 -- scripts/common.sh@335 -- # read -ra ver1 00:11:41.792 09:02:18 -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.792 09:02:18 -- scripts/common.sh@336 -- # read -ra ver2 00:11:41.792 09:02:18 -- scripts/common.sh@337 -- # local 'op=<' 00:11:41.792 09:02:18 -- scripts/common.sh@339 -- # ver1_l=2 00:11:41.792 09:02:18 -- scripts/common.sh@340 -- # ver2_l=1 00:11:41.792 09:02:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:41.792 09:02:18 -- scripts/common.sh@343 -- # case "$op" in 00:11:41.792 09:02:18 -- scripts/common.sh@344 -- # : 1 00:11:41.792 09:02:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:41.792 09:02:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.792 09:02:18 -- scripts/common.sh@364 -- # decimal 1 00:11:41.792 09:02:18 -- scripts/common.sh@352 -- # local d=1 00:11:41.792 09:02:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.792 09:02:18 -- scripts/common.sh@354 -- # echo 1 00:11:41.792 09:02:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:41.792 09:02:18 -- scripts/common.sh@365 -- # decimal 2 00:11:41.792 09:02:18 -- scripts/common.sh@352 -- # local d=2 00:11:41.792 09:02:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.792 09:02:18 -- scripts/common.sh@354 -- # echo 2 00:11:41.792 09:02:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:41.792 09:02:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:41.792 09:02:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:41.792 09:02:18 -- scripts/common.sh@367 -- # return 0 00:11:41.792 09:02:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.792 09:02:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.792 --rc genhtml_branch_coverage=1 00:11:41.792 --rc genhtml_function_coverage=1 00:11:41.792 --rc genhtml_legend=1 00:11:41.792 --rc geninfo_all_blocks=1 00:11:41.792 --rc geninfo_unexecuted_blocks=1 00:11:41.792 00:11:41.792 ' 00:11:41.792 09:02:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.792 --rc genhtml_branch_coverage=1 00:11:41.792 --rc genhtml_function_coverage=1 00:11:41.792 --rc genhtml_legend=1 00:11:41.792 --rc geninfo_all_blocks=1 00:11:41.792 --rc geninfo_unexecuted_blocks=1 00:11:41.792 00:11:41.792 ' 00:11:41.792 09:02:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.792 --rc genhtml_branch_coverage=1 00:11:41.792 --rc genhtml_function_coverage=1 00:11:41.792 --rc genhtml_legend=1 00:11:41.792 --rc geninfo_all_blocks=1 00:11:41.792 --rc geninfo_unexecuted_blocks=1 00:11:41.792 00:11:41.792 ' 00:11:41.792 09:02:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:41.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.792 --rc genhtml_branch_coverage=1 00:11:41.792 --rc genhtml_function_coverage=1 00:11:41.792 --rc genhtml_legend=1 00:11:41.792 --rc geninfo_all_blocks=1 00:11:41.792 --rc geninfo_unexecuted_blocks=1 00:11:41.792 00:11:41.792 ' 00:11:41.792 09:02:18 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:41.792 09:02:18 -- nvmf/common.sh@7 -- # uname -s 00:11:41.792 09:02:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.792 09:02:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.792 09:02:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.792 09:02:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.792 09:02:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.792 09:02:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.792 09:02:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.792 09:02:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.792 09:02:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.792 09:02:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.792 09:02:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:11:41.792 
09:02:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:11:41.792 09:02:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.792 09:02:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.792 09:02:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:41.792 09:02:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:41.792 09:02:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.792 09:02:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.792 09:02:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.792 09:02:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.792 09:02:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.792 09:02:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.792 09:02:18 -- paths/export.sh@5 -- # export PATH 00:11:41.792 09:02:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.792 09:02:18 -- nvmf/common.sh@46 -- # : 0 00:11:41.792 09:02:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:41.792 09:02:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:41.792 09:02:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:41.792 09:02:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.792 09:02:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.792 09:02:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:11:41.792 09:02:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:41.792 09:02:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:41.792 09:02:18 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.793 09:02:18 -- fips/fips.sh@89 -- # check_openssl_version 00:11:41.793 09:02:18 -- fips/fips.sh@83 -- # local target=3.0.0 00:11:41.793 09:02:18 -- fips/fips.sh@85 -- # openssl version 00:11:41.793 09:02:18 -- fips/fips.sh@85 -- # awk '{print $2}' 00:11:41.793 09:02:18 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:11:41.793 09:02:18 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:11:41.793 09:02:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:41.793 09:02:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:41.793 09:02:18 -- scripts/common.sh@335 -- # IFS=.-: 00:11:41.793 09:02:18 -- scripts/common.sh@335 -- # read -ra ver1 00:11:41.793 09:02:18 -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.793 09:02:18 -- scripts/common.sh@336 -- # read -ra ver2 00:11:41.793 09:02:18 -- scripts/common.sh@337 -- # local 'op=>=' 00:11:41.793 09:02:18 -- scripts/common.sh@339 -- # ver1_l=3 00:11:41.793 09:02:18 -- scripts/common.sh@340 -- # ver2_l=3 00:11:41.793 09:02:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:41.793 09:02:18 -- scripts/common.sh@343 -- # case "$op" in 00:11:41.793 09:02:18 -- scripts/common.sh@347 -- # : 1 00:11:41.793 09:02:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:41.793 09:02:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.793 09:02:18 -- scripts/common.sh@364 -- # decimal 3 00:11:41.793 09:02:18 -- scripts/common.sh@352 -- # local d=3 00:11:41.793 09:02:18 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:41.793 09:02:18 -- scripts/common.sh@354 -- # echo 3 00:11:41.793 09:02:18 -- scripts/common.sh@364 -- # ver1[v]=3 00:11:41.793 09:02:18 -- scripts/common.sh@365 -- # decimal 3 00:11:41.793 09:02:18 -- scripts/common.sh@352 -- # local d=3 00:11:41.793 09:02:18 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:41.793 09:02:18 -- scripts/common.sh@354 -- # echo 3 00:11:41.793 09:02:18 -- scripts/common.sh@365 -- # ver2[v]=3 00:11:41.793 09:02:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:41.793 09:02:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:41.793 09:02:18 -- scripts/common.sh@363 -- # (( v++ )) 00:11:41.793 09:02:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.793 09:02:18 -- scripts/common.sh@364 -- # decimal 1 00:11:41.793 09:02:18 -- scripts/common.sh@352 -- # local d=1 00:11:41.793 09:02:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.793 09:02:18 -- scripts/common.sh@354 -- # echo 1 00:11:41.793 09:02:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:41.793 09:02:18 -- scripts/common.sh@365 -- # decimal 0 00:11:41.793 09:02:18 -- scripts/common.sh@352 -- # local d=0 00:11:41.793 09:02:18 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:41.793 09:02:18 -- scripts/common.sh@354 -- # echo 0 00:11:41.793 09:02:18 -- scripts/common.sh@365 -- # ver2[v]=0 00:11:41.793 09:02:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:41.793 09:02:18 -- scripts/common.sh@366 -- # return 0 00:11:41.793 09:02:18 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:11:41.793 09:02:18 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:11:41.793 09:02:18 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:11:41.793 09:02:18 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:11:41.793 09:02:18 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:11:41.793 09:02:18 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:11:41.793 09:02:18 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:11:41.793 09:02:18 -- fips/fips.sh@113 -- # build_openssl_config 00:11:41.793 09:02:18 -- fips/fips.sh@37 -- # cat 00:11:41.793 09:02:18 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:11:41.793 09:02:18 -- fips/fips.sh@58 -- # cat - 00:11:41.793 09:02:18 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:11:41.793 09:02:18 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:11:41.793 09:02:18 -- fips/fips.sh@116 -- # mapfile -t providers 00:11:41.793 09:02:18 -- fips/fips.sh@116 -- # openssl list -providers 00:11:41.793 09:02:18 -- fips/fips.sh@116 -- # grep name 00:11:42.052 09:02:18 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:11:42.052 09:02:18 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:11:42.052 09:02:18 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:11:42.052 09:02:18 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:11:42.052 09:02:18 -- fips/fips.sh@127 -- # : 00:11:42.052 09:02:18 -- common/autotest_common.sh@650 -- # local es=0 00:11:42.052 09:02:18 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:11:42.052 09:02:18 -- common/autotest_common.sh@638 -- # local arg=openssl 00:11:42.052 09:02:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.052 09:02:18 -- common/autotest_common.sh@642 -- # type -t openssl 00:11:42.052 09:02:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.052 09:02:18 -- common/autotest_common.sh@644 -- # type -P openssl 00:11:42.052 09:02:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.052 09:02:18 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:11:42.052 09:02:18 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:11:42.052 09:02:18 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:11:42.052 Error setting digest 00:11:42.052 40329AF9DB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:11:42.052 40329AF9DB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:11:42.052 09:02:18 -- common/autotest_common.sh@653 -- # es=1 00:11:42.052 09:02:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.052 09:02:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.052 09:02:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.052 09:02:18 -- fips/fips.sh@130 -- # nvmftestinit 00:11:42.052 09:02:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:42.052 09:02:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.052 09:02:18 -- nvmf/common.sh@436 -- # prepare_net_devs 
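In short, the FIPS sanity check traced above boils down to the following pattern (a sketch: the config file is the spdk_fips.conf generated by build_openssl_config, and the expectation is that an MD5 digest is rejected while the fips provider is active):

    export OPENSSL_CONF=spdk_fips.conf      # config produced by build_openssl_config
    openssl list -providers | grep name     # must list both the base and fips providers
    if openssl md5 /dev/null 2>/dev/null; then
        echo 'MD5 succeeded - OpenSSL is not running in FIPS mode' >&2
        exit 1
    fi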
00:11:42.052 09:02:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:42.052 09:02:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:42.052 09:02:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.052 09:02:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.052 09:02:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.052 09:02:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:42.052 09:02:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:42.052 09:02:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:42.052 09:02:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:42.052 09:02:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:42.052 09:02:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:42.052 09:02:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.052 09:02:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.052 09:02:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:42.052 09:02:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:42.052 09:02:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.052 09:02:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.052 09:02:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.052 09:02:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.052 09:02:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.052 09:02:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.052 09:02:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.052 09:02:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.052 09:02:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:42.052 09:02:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:42.052 Cannot find device "nvmf_tgt_br" 00:11:42.052 09:02:18 -- nvmf/common.sh@154 -- # true 00:11:42.052 09:02:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.052 Cannot find device "nvmf_tgt_br2" 00:11:42.052 09:02:18 -- nvmf/common.sh@155 -- # true 00:11:42.052 09:02:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:42.052 09:02:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:42.052 Cannot find device "nvmf_tgt_br" 00:11:42.052 09:02:18 -- nvmf/common.sh@157 -- # true 00:11:42.052 09:02:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:42.052 Cannot find device "nvmf_tgt_br2" 00:11:42.052 09:02:18 -- nvmf/common.sh@158 -- # true 00:11:42.052 09:02:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:42.052 09:02:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:42.052 09:02:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.052 09:02:18 -- nvmf/common.sh@161 -- # true 00:11:42.052 09:02:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.052 09:02:18 -- nvmf/common.sh@162 -- # true 00:11:42.052 09:02:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:42.052 09:02:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:42.052 09:02:18 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:42.052 09:02:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:42.052 09:02:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:42.052 09:02:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:42.319 09:02:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:42.319 09:02:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:42.319 09:02:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:42.319 09:02:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:42.319 09:02:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:42.319 09:02:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:42.319 09:02:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:42.319 09:02:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:42.319 09:02:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:42.319 09:02:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:42.319 09:02:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:42.319 09:02:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:42.320 09:02:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:42.320 09:02:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:42.320 09:02:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:42.320 09:02:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:42.320 09:02:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:42.320 09:02:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:42.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:11:42.320 00:11:42.320 --- 10.0.0.2 ping statistics --- 00:11:42.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.320 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:42.320 09:02:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:42.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:42.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:42.320 00:11:42.320 --- 10.0.0.3 ping statistics --- 00:11:42.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.320 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:42.320 09:02:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:42.320 00:11:42.320 --- 10.0.0.1 ping statistics --- 00:11:42.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.321 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:42.321 09:02:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.321 09:02:19 -- nvmf/common.sh@421 -- # return 0 00:11:42.321 09:02:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:42.321 09:02:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.321 09:02:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:42.321 09:02:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:42.321 09:02:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.321 09:02:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:42.321 09:02:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:42.321 09:02:19 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:11:42.321 09:02:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:42.321 09:02:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:42.321 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:11:42.321 09:02:19 -- nvmf/common.sh@469 -- # nvmfpid=65893 00:11:42.321 09:02:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:42.321 09:02:19 -- nvmf/common.sh@470 -- # waitforlisten 65893 00:11:42.321 09:02:19 -- common/autotest_common.sh@829 -- # '[' -z 65893 ']' 00:11:42.321 09:02:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.321 09:02:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.321 09:02:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.321 09:02:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.321 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:11:42.589 [2024-11-17 09:02:19.253805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:42.589 [2024-11-17 09:02:19.253938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.589 [2024-11-17 09:02:19.395153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.589 [2024-11-17 09:02:19.443951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:42.589 [2024-11-17 09:02:19.444503] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.589 [2024-11-17 09:02:19.444674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.589 [2024-11-17 09:02:19.444793] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:42.589 [2024-11-17 09:02:19.444906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.536 09:02:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.536 09:02:20 -- common/autotest_common.sh@862 -- # return 0 00:11:43.536 09:02:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:43.536 09:02:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.536 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:11:43.536 09:02:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.536 09:02:20 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:11:43.536 09:02:20 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:43.536 09:02:20 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:43.536 09:02:20 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:43.536 09:02:20 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:43.536 09:02:20 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:43.536 09:02:20 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:43.536 09:02:20 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.796 [2024-11-17 09:02:20.548105] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.796 [2024-11-17 09:02:20.564072] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:43.796 [2024-11-17 09:02:20.564262] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.796 malloc0 00:11:43.796 09:02:20 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:43.796 09:02:20 -- fips/fips.sh@147 -- # bdevperf_pid=65932 00:11:43.796 09:02:20 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:43.796 09:02:20 -- fips/fips.sh@148 -- # waitforlisten 65932 /var/tmp/bdevperf.sock 00:11:43.796 09:02:20 -- common/autotest_common.sh@829 -- # '[' -z 65932 ']' 00:11:43.796 09:02:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:43.796 09:02:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.796 09:02:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:43.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:43.796 09:02:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.796 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:11:43.796 [2024-11-17 09:02:20.675310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:43.796 [2024-11-17 09:02:20.675602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65932 ] 00:11:44.055 [2024-11-17 09:02:20.811186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.055 [2024-11-17 09:02:20.878079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.990 09:02:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.991 09:02:21 -- common/autotest_common.sh@862 -- # return 0 00:11:44.991 09:02:21 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:44.991 [2024-11-17 09:02:21.812580] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:44.991 TLSTESTn1 00:11:44.991 09:02:21 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:45.250 Running I/O for 10 seconds... 00:11:55.233 00:11:55.233 Latency(us) 00:11:55.233 [2024-11-17T09:02:32.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.233 [2024-11-17T09:02:32.163Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:55.233 Verification LBA range: start 0x0 length 0x2000 00:11:55.233 TLSTESTn1 : 10.02 5843.92 22.83 0.00 0.00 21863.22 4230.05 18945.86 00:11:55.233 [2024-11-17T09:02:32.163Z] =================================================================================================================== 00:11:55.233 [2024-11-17T09:02:32.163Z] Total : 5843.92 22.83 0.00 0.00 21863.22 4230.05 18945.86 00:11:55.233 0 00:11:55.233 09:02:32 -- fips/fips.sh@1 -- # cleanup 00:11:55.233 09:02:32 -- fips/fips.sh@15 -- # process_shm --id 0 00:11:55.233 09:02:32 -- common/autotest_common.sh@806 -- # type=--id 00:11:55.233 09:02:32 -- common/autotest_common.sh@807 -- # id=0 00:11:55.233 09:02:32 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:55.233 09:02:32 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:55.233 09:02:32 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:55.233 09:02:32 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:55.233 09:02:32 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:55.233 09:02:32 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:55.233 nvmf_trace.0 00:11:55.233 09:02:32 -- common/autotest_common.sh@821 -- # return 0 00:11:55.233 09:02:32 -- fips/fips.sh@16 -- # killprocess 65932 00:11:55.233 09:02:32 -- common/autotest_common.sh@936 -- # '[' -z 65932 ']' 00:11:55.233 09:02:32 -- common/autotest_common.sh@940 -- # kill -0 65932 00:11:55.233 09:02:32 -- common/autotest_common.sh@941 -- # uname 00:11:55.233 09:02:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:55.233 09:02:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65932 00:11:55.493 killing process with pid 65932 00:11:55.493 Received shutdown signal, test time was about 10.000000 seconds 00:11:55.493 00:11:55.493 Latency(us) 00:11:55.493 
[2024-11-17T09:02:32.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.493 [2024-11-17T09:02:32.423Z] =================================================================================================================== 00:11:55.493 [2024-11-17T09:02:32.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:55.493 09:02:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:55.493 09:02:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:55.493 09:02:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65932' 00:11:55.493 09:02:32 -- common/autotest_common.sh@955 -- # kill 65932 00:11:55.493 09:02:32 -- common/autotest_common.sh@960 -- # wait 65932 00:11:55.493 09:02:32 -- fips/fips.sh@17 -- # nvmftestfini 00:11:55.493 09:02:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:55.493 09:02:32 -- nvmf/common.sh@116 -- # sync 00:11:55.493 09:02:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:55.493 09:02:32 -- nvmf/common.sh@119 -- # set +e 00:11:55.493 09:02:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:55.493 09:02:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:55.493 rmmod nvme_tcp 00:11:55.752 rmmod nvme_fabrics 00:11:55.752 rmmod nvme_keyring 00:11:55.752 09:02:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:55.752 09:02:32 -- nvmf/common.sh@123 -- # set -e 00:11:55.752 09:02:32 -- nvmf/common.sh@124 -- # return 0 00:11:55.752 09:02:32 -- nvmf/common.sh@477 -- # '[' -n 65893 ']' 00:11:55.752 09:02:32 -- nvmf/common.sh@478 -- # killprocess 65893 00:11:55.752 09:02:32 -- common/autotest_common.sh@936 -- # '[' -z 65893 ']' 00:11:55.752 09:02:32 -- common/autotest_common.sh@940 -- # kill -0 65893 00:11:55.752 09:02:32 -- common/autotest_common.sh@941 -- # uname 00:11:55.752 09:02:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:55.752 09:02:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65893 00:11:55.752 killing process with pid 65893 00:11:55.752 09:02:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:55.752 09:02:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:55.752 09:02:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65893' 00:11:55.752 09:02:32 -- common/autotest_common.sh@955 -- # kill 65893 00:11:55.752 09:02:32 -- common/autotest_common.sh@960 -- # wait 65893 00:11:55.752 09:02:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:55.752 09:02:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:55.752 09:02:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:55.752 09:02:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.752 09:02:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:55.752 09:02:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.752 09:02:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.752 09:02:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.013 09:02:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:56.013 09:02:32 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:56.013 ************************************ 00:11:56.013 END TEST nvmf_fips 00:11:56.013 ************************************ 00:11:56.013 00:11:56.013 real 0m14.278s 00:11:56.013 user 0m19.492s 00:11:56.013 sys 0m5.551s 00:11:56.013 09:02:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 
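Teardown in these tests is symmetric: cleanup archives the trace shared-memory file, killprocess looks up the PID's comm before signalling it (a sudo-wrapped target would have to be handled differently), nvmfcleanup syncs and unloads the nvme-tcp and nvme-fabrics modules, and nvmf_tcp_fini removes the namespace state and flushes the initiator address. Roughly, with the remove_spdk_ns internals, which xtrace_disable_per_cmd suppresses above, assumed to amount to deleting the namespace:

# archive the trace ring buffer before killing anything
tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0

# killprocess: check the PID is alive, note its comm, then signal and reap it
kill -0 "$pid"
process_name=$(ps --no-headers -o comm= "$pid")
echo "killing process with pid $pid"
kill "$pid" && wait "$pid"

# nvmfcleanup + nvmf_tcp_fini: drop the initiator modules and the test network state
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip netns delete nvmf_tgt_ns_spdk 2> /dev/null   # assumed effect of _remove_spdk_ns (not echoed)
ip -4 addr flush nvmf_init_if
rm /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt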
00:11:56.013 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:11:56.013 09:02:32 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:11:56.013 09:02:32 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:56.013 09:02:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:56.013 09:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.013 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:11:56.013 ************************************ 00:11:56.013 START TEST nvmf_fuzz 00:11:56.013 ************************************ 00:11:56.013 09:02:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:56.013 * Looking for test storage... 00:11:56.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.013 09:02:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:56.013 09:02:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:56.013 09:02:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:56.013 09:02:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:56.013 09:02:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:56.013 09:02:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:56.013 09:02:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:56.013 09:02:32 -- scripts/common.sh@335 -- # IFS=.-: 00:11:56.013 09:02:32 -- scripts/common.sh@335 -- # read -ra ver1 00:11:56.013 09:02:32 -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.013 09:02:32 -- scripts/common.sh@336 -- # read -ra ver2 00:11:56.013 09:02:32 -- scripts/common.sh@337 -- # local 'op=<' 00:11:56.013 09:02:32 -- scripts/common.sh@339 -- # ver1_l=2 00:11:56.013 09:02:32 -- scripts/common.sh@340 -- # ver2_l=1 00:11:56.013 09:02:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:56.013 09:02:32 -- scripts/common.sh@343 -- # case "$op" in 00:11:56.013 09:02:32 -- scripts/common.sh@344 -- # : 1 00:11:56.013 09:02:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:56.013 09:02:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.013 09:02:32 -- scripts/common.sh@364 -- # decimal 1 00:11:56.013 09:02:32 -- scripts/common.sh@352 -- # local d=1 00:11:56.013 09:02:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.013 09:02:32 -- scripts/common.sh@354 -- # echo 1 00:11:56.013 09:02:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:56.013 09:02:32 -- scripts/common.sh@365 -- # decimal 2 00:11:56.013 09:02:32 -- scripts/common.sh@352 -- # local d=2 00:11:56.013 09:02:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.013 09:02:32 -- scripts/common.sh@354 -- # echo 2 00:11:56.273 09:02:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:56.273 09:02:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:56.273 09:02:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:56.273 09:02:32 -- scripts/common.sh@367 -- # return 0 00:11:56.273 09:02:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.273 09:02:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:56.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.273 --rc genhtml_branch_coverage=1 00:11:56.273 --rc genhtml_function_coverage=1 00:11:56.273 --rc genhtml_legend=1 00:11:56.273 --rc geninfo_all_blocks=1 00:11:56.273 --rc geninfo_unexecuted_blocks=1 00:11:56.273 00:11:56.273 ' 00:11:56.273 09:02:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:56.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.273 --rc genhtml_branch_coverage=1 00:11:56.273 --rc genhtml_function_coverage=1 00:11:56.273 --rc genhtml_legend=1 00:11:56.273 --rc geninfo_all_blocks=1 00:11:56.273 --rc geninfo_unexecuted_blocks=1 00:11:56.273 00:11:56.273 ' 00:11:56.273 09:02:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:56.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.273 --rc genhtml_branch_coverage=1 00:11:56.273 --rc genhtml_function_coverage=1 00:11:56.273 --rc genhtml_legend=1 00:11:56.273 --rc geninfo_all_blocks=1 00:11:56.273 --rc geninfo_unexecuted_blocks=1 00:11:56.273 00:11:56.273 ' 00:11:56.273 09:02:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:56.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.273 --rc genhtml_branch_coverage=1 00:11:56.273 --rc genhtml_function_coverage=1 00:11:56.273 --rc genhtml_legend=1 00:11:56.273 --rc geninfo_all_blocks=1 00:11:56.273 --rc geninfo_unexecuted_blocks=1 00:11:56.273 00:11:56.273 ' 00:11:56.273 09:02:32 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.273 09:02:32 -- nvmf/common.sh@7 -- # uname -s 00:11:56.273 09:02:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.273 09:02:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.273 09:02:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.273 09:02:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.273 09:02:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.273 09:02:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.273 09:02:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.273 09:02:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.273 09:02:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.273 09:02:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.273 09:02:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 
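The lt 1.15 2 / cmp_versions exchange traced above is scripts/common.sh deciding, from the installed lcov version, which coverage options it may pass. The helper splits both version strings on ".", "-" and ":" and compares them field by field; a paraphrased sketch of that logic, not the verbatim source:

# return 0 (true) when $1 < $2, comparing dot/dash/colon separated numeric fields
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov older than 2"

Here 1.15 is indeed less than 2, so lcov_rc_opt keeps the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options seen in the trace.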
00:11:56.273 09:02:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:11:56.273 09:02:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.273 09:02:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.273 09:02:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.273 09:02:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.273 09:02:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.274 09:02:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.274 09:02:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.274 09:02:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.274 09:02:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.274 09:02:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.274 09:02:32 -- paths/export.sh@5 -- # export PATH 00:11:56.274 09:02:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.274 09:02:32 -- nvmf/common.sh@46 -- # : 0 00:11:56.274 09:02:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:56.274 09:02:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:56.274 09:02:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:56.274 09:02:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.274 09:02:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.274 09:02:32 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:56.274 09:02:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:56.274 09:02:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:56.274 09:02:32 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:11:56.274 09:02:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:56.274 09:02:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.274 09:02:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:56.274 09:02:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:56.274 09:02:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:56.274 09:02:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.274 09:02:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.274 09:02:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.274 09:02:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:56.274 09:02:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:56.274 09:02:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:56.274 09:02:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:56.274 09:02:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:56.274 09:02:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:56.274 09:02:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.274 09:02:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.274 09:02:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:56.274 09:02:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:56.274 09:02:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:56.274 09:02:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:56.274 09:02:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:56.274 09:02:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.274 09:02:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:56.274 09:02:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:56.274 09:02:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:56.274 09:02:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:56.274 09:02:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:56.274 09:02:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:56.274 Cannot find device "nvmf_tgt_br" 00:11:56.274 09:02:33 -- nvmf/common.sh@154 -- # true 00:11:56.274 09:02:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.274 Cannot find device "nvmf_tgt_br2" 00:11:56.274 09:02:33 -- nvmf/common.sh@155 -- # true 00:11:56.274 09:02:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:56.274 09:02:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:56.274 Cannot find device "nvmf_tgt_br" 00:11:56.274 09:02:33 -- nvmf/common.sh@157 -- # true 00:11:56.274 09:02:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:56.274 Cannot find device "nvmf_tgt_br2" 00:11:56.274 09:02:33 -- nvmf/common.sh@158 -- # true 00:11:56.274 09:02:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:56.274 09:02:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:56.274 09:02:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:56.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:56.274 09:02:33 -- nvmf/common.sh@161 -- # true 00:11:56.274 09:02:33 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:56.274 09:02:33 -- nvmf/common.sh@162 -- # true 00:11:56.274 09:02:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:56.274 09:02:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:56.274 09:02:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:56.274 09:02:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:56.274 09:02:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:56.274 09:02:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:56.274 09:02:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:56.274 09:02:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:56.274 09:02:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:56.274 09:02:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:56.533 09:02:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:56.533 09:02:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:56.533 09:02:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:56.533 09:02:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:56.534 09:02:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:56.534 09:02:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:56.534 09:02:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:56.534 09:02:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:56.534 09:02:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:56.534 09:02:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:56.534 09:02:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:56.534 09:02:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:56.534 09:02:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:56.534 09:02:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:56.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:56.534 00:11:56.534 --- 10.0.0.2 ping statistics --- 00:11:56.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.534 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:56.534 09:02:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:56.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:56.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:56.534 00:11:56.534 --- 10.0.0.3 ping statistics --- 00:11:56.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.534 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:56.534 09:02:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:56.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:56.534 00:11:56.534 --- 10.0.0.1 ping statistics --- 00:11:56.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.534 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:56.534 09:02:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.534 09:02:33 -- nvmf/common.sh@421 -- # return 0 00:11:56.534 09:02:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:56.534 09:02:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.534 09:02:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:56.534 09:02:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:56.534 09:02:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.534 09:02:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:56.534 09:02:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:56.534 09:02:33 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=66272 00:11:56.534 09:02:33 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:56.534 09:02:33 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 66272 00:11:56.534 09:02:33 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:56.534 09:02:33 -- common/autotest_common.sh@829 -- # '[' -z 66272 ']' 00:11:56.534 09:02:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.534 09:02:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.534 09:02:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
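The fabrics_fuzz.sh run that follows is a small, self-contained recipe: start nvmf_tgt inside the target namespace, expose a single Malloc namespace over TCP, then point the nvme_fuzz app at that transport ID, once as a 30 second seeded run and once replaying the bundled example.json cases. Condensed from the trace below (rpc_cmd is the harness wrapper around scripts/rpc.py on the default socket, and the harness waits for the RPC socket before issuing commands):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target: nvmf_tgt on one core inside the namespace, one Malloc0 namespace on cnode1
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# fuzz it: a 30 s seeded run (-t 30 -S 123456), then a replay of example.json
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
$fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
$fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
      -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a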
00:11:56.534 09:02:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.534 09:02:33 -- common/autotest_common.sh@10 -- # set +x 00:11:57.471 09:02:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.471 09:02:34 -- common/autotest_common.sh@862 -- # return 0 00:11:57.471 09:02:34 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.471 09:02:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.471 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:11:57.471 09:02:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.471 09:02:34 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:11:57.471 09:02:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.471 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:11:57.471 Malloc0 00:11:57.471 09:02:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.471 09:02:34 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:57.471 09:02:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.471 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:11:57.471 09:02:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.471 09:02:34 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.471 09:02:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.471 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:11:57.471 09:02:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.471 09:02:34 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.471 09:02:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.471 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:11:57.729 09:02:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.729 09:02:34 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:11:57.729 09:02:34 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:11:57.989 Shutting down the fuzz application 00:11:57.989 09:02:34 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:11:58.248 Shutting down the fuzz application 00:11:58.248 09:02:35 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.248 09:02:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.248 09:02:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.248 09:02:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.248 09:02:35 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:58.248 09:02:35 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:11:58.248 09:02:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:58.248 09:02:35 -- nvmf/common.sh@116 -- # sync 00:11:58.248 09:02:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:58.248 09:02:35 -- nvmf/common.sh@119 -- # set +e 00:11:58.248 09:02:35 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:11:58.248 09:02:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:58.248 rmmod nvme_tcp 00:11:58.248 rmmod nvme_fabrics 00:11:58.248 rmmod nvme_keyring 00:11:58.507 09:02:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:58.507 09:02:35 -- nvmf/common.sh@123 -- # set -e 00:11:58.507 09:02:35 -- nvmf/common.sh@124 -- # return 0 00:11:58.507 09:02:35 -- nvmf/common.sh@477 -- # '[' -n 66272 ']' 00:11:58.507 09:02:35 -- nvmf/common.sh@478 -- # killprocess 66272 00:11:58.507 09:02:35 -- common/autotest_common.sh@936 -- # '[' -z 66272 ']' 00:11:58.507 09:02:35 -- common/autotest_common.sh@940 -- # kill -0 66272 00:11:58.507 09:02:35 -- common/autotest_common.sh@941 -- # uname 00:11:58.507 09:02:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.507 09:02:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66272 00:11:58.507 killing process with pid 66272 00:11:58.507 09:02:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:58.507 09:02:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:58.507 09:02:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66272' 00:11:58.507 09:02:35 -- common/autotest_common.sh@955 -- # kill 66272 00:11:58.507 09:02:35 -- common/autotest_common.sh@960 -- # wait 66272 00:11:58.507 09:02:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:58.507 09:02:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:58.507 09:02:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:58.507 09:02:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.507 09:02:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:58.507 09:02:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.507 09:02:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.508 09:02:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.767 09:02:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:58.767 09:02:35 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:11:58.767 00:11:58.767 real 0m2.706s 00:11:58.767 user 0m2.877s 00:11:58.767 sys 0m0.572s 00:11:58.767 09:02:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:58.767 ************************************ 00:11:58.767 END TEST nvmf_fuzz 00:11:58.767 ************************************ 00:11:58.767 09:02:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.767 09:02:35 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:58.767 09:02:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:58.767 09:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:58.767 09:02:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.767 ************************************ 00:11:58.767 START TEST nvmf_multiconnection 00:11:58.767 ************************************ 00:11:58.767 09:02:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:58.767 * Looking for test storage... 
00:11:58.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:58.767 09:02:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:58.767 09:02:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:58.767 09:02:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:58.767 09:02:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:58.767 09:02:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:58.767 09:02:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:58.767 09:02:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:58.767 09:02:35 -- scripts/common.sh@335 -- # IFS=.-: 00:11:58.767 09:02:35 -- scripts/common.sh@335 -- # read -ra ver1 00:11:58.767 09:02:35 -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.767 09:02:35 -- scripts/common.sh@336 -- # read -ra ver2 00:11:58.767 09:02:35 -- scripts/common.sh@337 -- # local 'op=<' 00:11:58.767 09:02:35 -- scripts/common.sh@339 -- # ver1_l=2 00:11:58.767 09:02:35 -- scripts/common.sh@340 -- # ver2_l=1 00:11:58.767 09:02:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:58.767 09:02:35 -- scripts/common.sh@343 -- # case "$op" in 00:11:58.767 09:02:35 -- scripts/common.sh@344 -- # : 1 00:11:58.767 09:02:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:58.767 09:02:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.767 09:02:35 -- scripts/common.sh@364 -- # decimal 1 00:11:58.767 09:02:35 -- scripts/common.sh@352 -- # local d=1 00:11:58.767 09:02:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.767 09:02:35 -- scripts/common.sh@354 -- # echo 1 00:11:58.767 09:02:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:58.767 09:02:35 -- scripts/common.sh@365 -- # decimal 2 00:11:58.767 09:02:35 -- scripts/common.sh@352 -- # local d=2 00:11:58.767 09:02:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.767 09:02:35 -- scripts/common.sh@354 -- # echo 2 00:11:58.767 09:02:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:58.767 09:02:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:58.767 09:02:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:58.767 09:02:35 -- scripts/common.sh@367 -- # return 0 00:11:58.767 09:02:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.767 09:02:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:58.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.767 --rc genhtml_branch_coverage=1 00:11:58.767 --rc genhtml_function_coverage=1 00:11:58.767 --rc genhtml_legend=1 00:11:58.767 --rc geninfo_all_blocks=1 00:11:58.767 --rc geninfo_unexecuted_blocks=1 00:11:58.767 00:11:58.767 ' 00:11:58.767 09:02:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:58.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.767 --rc genhtml_branch_coverage=1 00:11:58.767 --rc genhtml_function_coverage=1 00:11:58.767 --rc genhtml_legend=1 00:11:58.767 --rc geninfo_all_blocks=1 00:11:58.767 --rc geninfo_unexecuted_blocks=1 00:11:58.767 00:11:58.767 ' 00:11:58.767 09:02:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:58.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.767 --rc genhtml_branch_coverage=1 00:11:58.767 --rc genhtml_function_coverage=1 00:11:58.767 --rc genhtml_legend=1 00:11:58.767 --rc geninfo_all_blocks=1 00:11:58.767 --rc geninfo_unexecuted_blocks=1 00:11:58.767 00:11:58.767 ' 00:11:58.767 
09:02:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:58.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.767 --rc genhtml_branch_coverage=1 00:11:58.767 --rc genhtml_function_coverage=1 00:11:58.767 --rc genhtml_legend=1 00:11:58.767 --rc geninfo_all_blocks=1 00:11:58.767 --rc geninfo_unexecuted_blocks=1 00:11:58.767 00:11:58.767 ' 00:11:58.767 09:02:35 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.025 09:02:35 -- nvmf/common.sh@7 -- # uname -s 00:11:59.025 09:02:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.025 09:02:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.025 09:02:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.025 09:02:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.025 09:02:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.025 09:02:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.025 09:02:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.025 09:02:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.025 09:02:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.025 09:02:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.025 09:02:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:11:59.025 09:02:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:11:59.025 09:02:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.025 09:02:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.025 09:02:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:59.025 09:02:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.025 09:02:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.025 09:02:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.025 09:02:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.025 09:02:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.025 09:02:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.025 09:02:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.025 09:02:35 -- paths/export.sh@5 -- # export PATH 00:11:59.025 09:02:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.025 09:02:35 -- nvmf/common.sh@46 -- # : 0 00:11:59.025 09:02:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:59.025 09:02:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:59.025 09:02:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:59.025 09:02:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.025 09:02:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.025 09:02:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:59.025 09:02:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:59.025 09:02:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:59.025 09:02:35 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.025 09:02:35 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.025 09:02:35 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:11:59.025 09:02:35 -- target/multiconnection.sh@16 -- # nvmftestinit 00:11:59.025 09:02:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:59.025 09:02:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.025 09:02:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:59.025 09:02:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:59.025 09:02:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:59.025 09:02:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.025 09:02:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.025 09:02:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.025 09:02:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:59.025 09:02:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:59.025 09:02:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:59.025 09:02:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:59.025 09:02:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:59.025 09:02:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:59.025 09:02:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.025 09:02:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.025 09:02:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:59.025 09:02:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:59.025 09:02:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.025 09:02:35 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.025 09:02:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.025 09:02:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.025 09:02:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.025 09:02:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.025 09:02:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:59.025 09:02:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.025 09:02:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:59.025 09:02:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:59.025 Cannot find device "nvmf_tgt_br" 00:11:59.025 09:02:35 -- nvmf/common.sh@154 -- # true 00:11:59.025 09:02:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.025 Cannot find device "nvmf_tgt_br2" 00:11:59.025 09:02:35 -- nvmf/common.sh@155 -- # true 00:11:59.025 09:02:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:59.025 09:02:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:59.025 Cannot find device "nvmf_tgt_br" 00:11:59.025 09:02:35 -- nvmf/common.sh@157 -- # true 00:11:59.025 09:02:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:59.025 Cannot find device "nvmf_tgt_br2" 00:11:59.025 09:02:35 -- nvmf/common.sh@158 -- # true 00:11:59.025 09:02:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:59.025 09:02:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:59.025 09:02:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.025 09:02:35 -- nvmf/common.sh@161 -- # true 00:11:59.025 09:02:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.025 09:02:35 -- nvmf/common.sh@162 -- # true 00:11:59.025 09:02:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.025 09:02:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.025 09:02:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:59.025 09:02:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:59.025 09:02:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:59.025 09:02:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:59.025 09:02:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:59.025 09:02:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:59.025 09:02:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:59.284 09:02:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:59.284 09:02:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:59.284 09:02:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:59.284 09:02:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:59.284 09:02:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:59.284 09:02:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:11:59.284 09:02:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:59.284 09:02:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:59.284 09:02:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:59.284 09:02:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:59.284 09:02:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:59.284 09:02:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:59.284 09:02:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:59.284 09:02:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:59.284 09:02:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:59.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:11:59.284 00:11:59.284 --- 10.0.0.2 ping statistics --- 00:11:59.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.284 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:59.284 09:02:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:59.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:59.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:11:59.284 00:11:59.284 --- 10.0.0.3 ping statistics --- 00:11:59.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.284 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:59.284 09:02:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:59.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:59.284 00:11:59.284 --- 10.0.0.1 ping statistics --- 00:11:59.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.284 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:59.284 09:02:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.284 09:02:36 -- nvmf/common.sh@421 -- # return 0 00:11:59.284 09:02:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:59.284 09:02:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.284 09:02:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:59.284 09:02:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:59.284 09:02:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.284 09:02:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:59.284 09:02:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:59.284 09:02:36 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:11:59.284 09:02:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:59.284 09:02:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.284 09:02:36 -- common/autotest_common.sh@10 -- # set +x 00:11:59.284 09:02:36 -- nvmf/common.sh@469 -- # nvmfpid=66466 00:11:59.284 09:02:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.284 09:02:36 -- nvmf/common.sh@470 -- # waitforlisten 66466 00:11:59.284 09:02:36 -- common/autotest_common.sh@829 -- # '[' -z 66466 ']' 00:11:59.284 09:02:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.284 09:02:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.284 09:02:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.284 09:02:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.284 09:02:36 -- common/autotest_common.sh@10 -- # set +x 00:11:59.284 [2024-11-17 09:02:36.171980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:59.284 [2024-11-17 09:02:36.172088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.544 [2024-11-17 09:02:36.312055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.544 [2024-11-17 09:02:36.369058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:59.544 [2024-11-17 09:02:36.369472] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.544 [2024-11-17 09:02:36.369622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.544 [2024-11-17 09:02:36.369757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.544 [2024-11-17 09:02:36.370020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.544 [2024-11-17 09:02:36.370101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.544 [2024-11-17 09:02:36.370177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.544 [2024-11-17 09:02:36.370177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.482 09:02:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.482 09:02:37 -- common/autotest_common.sh@862 -- # return 0 00:12:00.482 09:02:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:00.482 09:02:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.482 09:02:37 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 [2024-11-17 09:02:37.241945] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:00.482 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.482 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 Malloc1 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 [2024-11-17 09:02:37.306098] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.482 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 Malloc2 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.482 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 Malloc3 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:00.482 
09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.482 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.482 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.482 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:00.482 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.482 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.741 Malloc4 00:12:00.741 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.741 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:00.741 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.741 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.741 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.741 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:00.741 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.741 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.742 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 Malloc5 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.742 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 Malloc6 00:12:00.742 09:02:37 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.742 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 Malloc7 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.742 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 Malloc8 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 
-- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.742 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 Malloc9 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:00.742 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.742 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.742 09:02:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:00.742 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.742 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 Malloc10 00:12:01.001 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.001 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:01.001 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.001 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.001 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:01.001 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.001 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.002 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:12:01.002 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.002 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:01.002 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.002 09:02:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:01.002 09:02:37 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:01.002 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.002 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:01.002 Malloc11 00:12:01.002 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.002 09:02:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:01.002 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.002 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:01.002 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.002 09:02:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:01.002 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.002 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:01.002 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.002 09:02:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:01.002 09:02:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.002 09:02:37 -- common/autotest_common.sh@10 -- # set +x 00:12:01.002 09:02:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.002 09:02:37 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:01.002 09:02:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:01.002 09:02:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.002 09:02:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:01.002 09:02:37 -- common/autotest_common.sh@1187 -- # local i=0 00:12:01.002 09:02:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.002 09:02:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:01.002 09:02:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:03.538 09:02:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:03.538 09:02:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:03.538 09:02:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:12:03.538 09:02:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:03.538 09:02:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.538 09:02:39 -- common/autotest_common.sh@1197 -- # return 0 00:12:03.538 09:02:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:03.538 09:02:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:03.538 09:02:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:03.538 09:02:40 -- common/autotest_common.sh@1187 -- # local i=0 00:12:03.538 09:02:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.538 09:02:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:03.538 09:02:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:05.444 09:02:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:05.444 09:02:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:12:05.444 09:02:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:12:05.444 09:02:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:05.444 09:02:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.444 09:02:42 -- common/autotest_common.sh@1197 -- # return 0 00:12:05.444 09:02:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:05.444 09:02:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:05.444 09:02:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:05.444 09:02:42 -- common/autotest_common.sh@1187 -- # local i=0 00:12:05.444 09:02:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.444 09:02:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:05.444 09:02:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:07.398 09:02:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:07.398 09:02:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:07.398 09:02:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:12:07.398 09:02:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:07.398 09:02:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.398 09:02:44 -- common/autotest_common.sh@1197 -- # return 0 00:12:07.398 09:02:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:07.398 09:02:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:07.678 09:02:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:07.678 09:02:44 -- common/autotest_common.sh@1187 -- # local i=0 00:12:07.678 09:02:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.678 09:02:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:07.678 09:02:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:09.584 09:02:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:09.584 09:02:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:09.584 09:02:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:12:09.584 09:02:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:09.584 09:02:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.584 09:02:46 -- common/autotest_common.sh@1197 -- # return 0 00:12:09.584 09:02:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:09.584 09:02:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:09.843 09:02:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:09.843 09:02:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:09.843 09:02:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.843 09:02:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:09.843 09:02:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:11.802 09:02:48 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:11.802 09:02:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:11.802 09:02:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:12:11.802 09:02:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:11.802 09:02:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.802 09:02:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:11.802 09:02:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.802 09:02:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:11.802 09:02:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:11.802 09:02:48 -- common/autotest_common.sh@1187 -- # local i=0 00:12:11.802 09:02:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.802 09:02:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:11.802 09:02:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:14.334 09:02:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:14.334 09:02:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:14.335 09:02:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:14.335 09:02:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:14.335 09:02:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.335 09:02:50 -- common/autotest_common.sh@1197 -- # return 0 00:12:14.335 09:02:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:14.335 09:02:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:14.335 09:02:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:14.335 09:02:50 -- common/autotest_common.sh@1187 -- # local i=0 00:12:14.335 09:02:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.335 09:02:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:14.335 09:02:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:16.239 09:02:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:16.239 09:02:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:16.239 09:02:52 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:12:16.239 09:02:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:16.239 09:02:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.239 09:02:52 -- common/autotest_common.sh@1197 -- # return 0 00:12:16.239 09:02:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:16.239 09:02:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:16.239 09:02:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:16.239 09:02:53 -- common/autotest_common.sh@1187 -- # local i=0 00:12:16.239 09:02:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.239 09:02:53 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:16.239 09:02:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:18.144 09:02:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:18.145 09:02:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:18.145 09:02:55 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:12:18.145 09:02:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:18.145 09:02:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.145 09:02:55 -- common/autotest_common.sh@1197 -- # return 0 00:12:18.145 09:02:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:18.145 09:02:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:18.402 09:02:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:18.402 09:02:55 -- common/autotest_common.sh@1187 -- # local i=0 00:12:18.402 09:02:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.402 09:02:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:18.402 09:02:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:20.306 09:02:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:20.306 09:02:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:20.306 09:02:57 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:12:20.306 09:02:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:20.306 09:02:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.306 09:02:57 -- common/autotest_common.sh@1197 -- # return 0 00:12:20.306 09:02:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:20.306 09:02:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:20.566 09:02:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:20.566 09:02:57 -- common/autotest_common.sh@1187 -- # local i=0 00:12:20.566 09:02:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.566 09:02:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:20.566 09:02:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:22.489 09:02:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:22.489 09:02:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:22.489 09:02:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:12:22.489 09:02:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:22.489 09:02:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.489 09:02:59 -- common/autotest_common.sh@1197 -- # return 0 00:12:22.489 09:02:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:22.489 09:02:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:22.757 09:02:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:22.757 09:02:59 -- common/autotest_common.sh@1187 -- # local i=0 
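[Editor's note] The eleven connect/wait cycles traced above all follow the same two-phase pattern: the target-side RPCs that created cnode1 through cnode11 earlier in the trace, and the host-side connect plus serial-number poll running here. Below is a condensed sketch of that pattern, not the verbatim multiconnection.sh script: scripts/rpc.py stands in for the rpc_cmd wrapper used in the trace, and the host NQN/ID, the 10.0.0.2:4420 listener, the 2-second sleep, and the 15-try limit are taken directly from the trace lines.

    # target side: one malloc bdev, subsystem, namespace and TCP listener per cnodeN
    for i in $(seq 1 11); do
      scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

    # host side: connect each subsystem, then poll until a block device with serial SPDKN shows up
    for i in $(seq 1 11); do
      nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c \
        --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      tries=0
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        (( ++tries > 15 )) && { echo "device with serial SPDK$i never appeared" >&2; exit 1; }
        sleep 2
      done
    done

The trace resumes below with the final serial check for cnode11 before the fio workloads start.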
00:12:22.757 09:02:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.757 09:02:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:22.757 09:02:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:24.714 09:03:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:24.714 09:03:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:24.714 09:03:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:12:24.714 09:03:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:24.714 09:03:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.714 09:03:01 -- common/autotest_common.sh@1197 -- # return 0 00:12:24.714 09:03:01 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:24.714 [global] 00:12:24.714 thread=1 00:12:24.714 invalidate=1 00:12:24.714 rw=read 00:12:24.714 time_based=1 00:12:24.714 runtime=10 00:12:24.714 ioengine=libaio 00:12:24.714 direct=1 00:12:24.714 bs=262144 00:12:24.714 iodepth=64 00:12:24.714 norandommap=1 00:12:24.714 numjobs=1 00:12:24.714 00:12:24.714 [job0] 00:12:24.714 filename=/dev/nvme0n1 00:12:24.714 [job1] 00:12:24.714 filename=/dev/nvme10n1 00:12:24.714 [job2] 00:12:24.714 filename=/dev/nvme1n1 00:12:24.714 [job3] 00:12:24.714 filename=/dev/nvme2n1 00:12:24.714 [job4] 00:12:24.714 filename=/dev/nvme3n1 00:12:24.714 [job5] 00:12:24.714 filename=/dev/nvme4n1 00:12:24.714 [job6] 00:12:24.714 filename=/dev/nvme5n1 00:12:24.714 [job7] 00:12:24.714 filename=/dev/nvme6n1 00:12:24.714 [job8] 00:12:24.714 filename=/dev/nvme7n1 00:12:24.714 [job9] 00:12:24.714 filename=/dev/nvme8n1 00:12:24.714 [job10] 00:12:24.714 filename=/dev/nvme9n1 00:12:24.974 Could not set queue depth (nvme0n1) 00:12:24.974 Could not set queue depth (nvme10n1) 00:12:24.974 Could not set queue depth (nvme1n1) 00:12:24.974 Could not set queue depth (nvme2n1) 00:12:24.974 Could not set queue depth (nvme3n1) 00:12:24.974 Could not set queue depth (nvme4n1) 00:12:24.974 Could not set queue depth (nvme5n1) 00:12:24.974 Could not set queue depth (nvme6n1) 00:12:24.974 Could not set queue depth (nvme7n1) 00:12:24.974 Could not set queue depth (nvme8n1) 00:12:24.974 Could not set queue depth (nvme9n1) 00:12:24.974 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:12:24.974 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:24.974 fio-3.35 00:12:24.974 Starting 11 threads 00:12:37.187 00:12:37.187 job0: (groupid=0, jobs=1): err= 0: pid=66932: Sun Nov 17 09:03:12 2024 00:12:37.187 read: IOPS=538, BW=135MiB/s (141MB/s)(1358MiB/10096msec) 00:12:37.187 slat (usec): min=21, max=88743, avg=1836.59, stdev=4306.87 00:12:37.187 clat (msec): min=45, max=231, avg=116.96, stdev=12.72 00:12:37.187 lat (msec): min=46, max=232, avg=118.80, stdev=13.12 00:12:37.187 clat percentiles (msec): 00:12:37.187 | 1.00th=[ 82], 5.00th=[ 107], 10.00th=[ 109], 20.00th=[ 111], 00:12:37.187 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 116], 00:12:37.187 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 127], 95.00th=[ 142], 00:12:37.187 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 209], 99.95th=[ 211], 00:12:37.187 | 99.99th=[ 232] 00:12:37.187 bw ( KiB/s): min=102605, max=143872, per=8.21%, avg=137456.40, stdev=10419.85, samples=20 00:12:37.187 iops : min= 400, max= 562, avg=536.85, stdev=40.83, samples=20 00:12:37.187 lat (msec) : 50=0.11%, 100=1.18%, 250=98.71% 00:12:37.187 cpu : usr=0.26%, sys=2.38%, ctx=1261, majf=0, minf=4097 00:12:37.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:37.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.187 issued rwts: total=5433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.187 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.187 job1: (groupid=0, jobs=1): err= 0: pid=66933: Sun Nov 17 09:03:12 2024 00:12:37.187 read: IOPS=526, BW=132MiB/s (138MB/s)(1329MiB/10095msec) 00:12:37.187 slat (usec): min=21, max=125373, avg=1877.90, stdev=4679.54 00:12:37.187 clat (msec): min=83, max=271, avg=119.49, stdev=13.81 00:12:37.187 lat (msec): min=96, max=271, avg=121.37, stdev=14.09 00:12:37.187 clat percentiles (msec): 00:12:37.187 | 1.00th=[ 104], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 112], 00:12:37.187 | 30.00th=[ 114], 40.00th=[ 115], 50.00th=[ 116], 60.00th=[ 118], 00:12:37.187 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 155], 00:12:37.187 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 199], 99.95th=[ 199], 00:12:37.187 | 99.99th=[ 271] 00:12:37.187 bw ( KiB/s): min=78336, max=143360, per=8.03%, avg=134465.20, stdev=15037.32, samples=20 00:12:37.187 iops : min= 306, max= 560, avg=525.15, stdev=58.72, samples=20 00:12:37.187 lat (msec) : 100=0.28%, 250=99.70%, 500=0.02% 00:12:37.187 cpu : usr=0.31%, sys=2.25%, ctx=1236, majf=0, minf=4097 00:12:37.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:37.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.187 issued rwts: total=5316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.187 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.187 job2: (groupid=0, jobs=1): err= 0: pid=66934: Sun Nov 17 09:03:12 2024 00:12:37.187 read: IOPS=572, BW=143MiB/s (150MB/s)(1445MiB/10096msec) 00:12:37.187 slat (usec): min=20, max=68218, avg=1698.90, stdev=3897.56 00:12:37.187 clat (msec): min=9, max=201, avg=109.91, stdev=16.37 00:12:37.187 lat (msec): min=15, max=212, avg=111.61, stdev=16.73 00:12:37.187 clat percentiles (msec): 00:12:37.187 | 1.00th=[ 53], 5.00th=[ 83], 10.00th=[ 
89], 20.00th=[ 105], 00:12:37.187 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 112], 60.00th=[ 114], 00:12:37.187 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 127], 00:12:37.187 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 201], 99.95th=[ 201], 00:12:37.187 | 99.99th=[ 201] 00:12:37.188 bw ( KiB/s): min=107008, max=189952, per=8.74%, avg=146354.75, stdev=17394.36, samples=20 00:12:37.188 iops : min= 418, max= 742, avg=571.65, stdev=67.96, samples=20 00:12:37.188 lat (msec) : 10=0.02%, 20=0.09%, 50=0.73%, 100=15.12%, 250=84.05% 00:12:37.188 cpu : usr=0.30%, sys=2.43%, ctx=1346, majf=0, minf=4097 00:12:37.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:37.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.188 issued rwts: total=5780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.188 job3: (groupid=0, jobs=1): err= 0: pid=66935: Sun Nov 17 09:03:12 2024 00:12:37.188 read: IOPS=682, BW=171MiB/s (179MB/s)(1711MiB/10027msec) 00:12:37.188 slat (usec): min=18, max=45618, avg=1456.71, stdev=3255.06 00:12:37.188 clat (msec): min=25, max=148, avg=92.16, stdev=10.66 00:12:37.188 lat (msec): min=27, max=148, avg=93.62, stdev=10.73 00:12:37.188 clat percentiles (msec): 00:12:37.188 | 1.00th=[ 70], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 86], 00:12:37.188 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 93], 00:12:37.188 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 103], 95.00th=[ 114], 00:12:37.188 | 99.00th=[ 128], 99.50th=[ 133], 99.90th=[ 142], 99.95th=[ 142], 00:12:37.188 | 99.99th=[ 150] 00:12:37.188 bw ( KiB/s): min=134144, max=182784, per=10.37%, avg=173656.65, stdev=13437.43, samples=20 00:12:37.188 iops : min= 524, max= 714, avg=678.20, stdev=52.59, samples=20 00:12:37.188 lat (msec) : 50=0.34%, 100=86.62%, 250=13.05% 00:12:37.188 cpu : usr=0.27%, sys=2.56%, ctx=1482, majf=0, minf=4097 00:12:37.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:37.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.188 issued rwts: total=6845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.188 job4: (groupid=0, jobs=1): err= 0: pid=66936: Sun Nov 17 09:03:12 2024 00:12:37.188 read: IOPS=532, BW=133MiB/s (140MB/s)(1345MiB/10095msec) 00:12:37.188 slat (usec): min=21, max=83621, avg=1854.64, stdev=4414.67 00:12:37.188 clat (msec): min=20, max=221, avg=118.05, stdev=12.18 00:12:37.188 lat (msec): min=21, max=228, avg=119.91, stdev=12.51 00:12:37.188 clat percentiles (msec): 00:12:37.188 | 1.00th=[ 103], 5.00th=[ 107], 10.00th=[ 109], 20.00th=[ 112], 00:12:37.188 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 117], 00:12:37.188 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 146], 00:12:37.188 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 207], 99.95th=[ 222], 00:12:37.188 | 99.99th=[ 222] 00:12:37.188 bw ( KiB/s): min=99328, max=143360, per=8.13%, avg=136089.70, stdev=11454.04, samples=20 00:12:37.188 iops : min= 388, max= 560, avg=531.55, stdev=44.73, samples=20 00:12:37.188 lat (msec) : 50=0.15%, 100=0.24%, 250=99.61% 00:12:37.188 cpu : usr=0.28%, sys=2.38%, ctx=1262, majf=0, minf=4097 00:12:37.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:37.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.188 issued rwts: total=5380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.188 job5: (groupid=0, jobs=1): err= 0: pid=66937: Sun Nov 17 09:03:12 2024 00:12:37.188 read: IOPS=548, BW=137MiB/s (144MB/s)(1383MiB/10095msec) 00:12:37.188 slat (usec): min=18, max=69708, avg=1741.15, stdev=3995.03 00:12:37.188 clat (msec): min=20, max=202, avg=114.86, stdev=14.60 00:12:37.188 lat (msec): min=21, max=207, avg=116.60, stdev=14.96 00:12:37.188 clat percentiles (msec): 00:12:37.188 | 1.00th=[ 48], 5.00th=[ 103], 10.00th=[ 107], 20.00th=[ 110], 00:12:37.188 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 115], 00:12:37.188 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 126], 95.00th=[ 142], 00:12:37.188 | 99.00th=[ 161], 99.50th=[ 163], 99.90th=[ 190], 99.95th=[ 199], 00:12:37.188 | 99.99th=[ 203] 00:12:37.188 bw ( KiB/s): min=99129, max=167089, per=8.36%, avg=140008.55, stdev=12647.15, samples=20 00:12:37.188 iops : min= 387, max= 652, avg=546.80, stdev=49.37, samples=20 00:12:37.188 lat (msec) : 50=1.01%, 100=2.64%, 250=96.35% 00:12:37.188 cpu : usr=0.21%, sys=2.16%, ctx=1325, majf=0, minf=4097 00:12:37.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:37.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.188 issued rwts: total=5533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.188 job6: (groupid=0, jobs=1): err= 0: pid=66938: Sun Nov 17 09:03:12 2024 00:12:37.188 read: IOPS=685, BW=171MiB/s (180MB/s)(1720MiB/10028msec) 00:12:37.188 slat (usec): min=18, max=26150, avg=1448.34, stdev=3160.98 00:12:37.188 clat (msec): min=16, max=156, avg=91.74, stdev=10.80 00:12:37.188 lat (msec): min=16, max=157, avg=93.18, stdev=10.90 00:12:37.188 clat percentiles (msec): 00:12:37.188 | 1.00th=[ 67], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 86], 00:12:37.188 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 93], 00:12:37.188 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 102], 95.00th=[ 113], 00:12:37.188 | 99.00th=[ 129], 99.50th=[ 134], 99.90th=[ 146], 99.95th=[ 150], 00:12:37.188 | 99.99th=[ 157] 00:12:37.188 bw ( KiB/s): min=134656, max=183808, per=10.42%, avg=174489.40, stdev=12106.41, samples=20 00:12:37.188 iops : min= 526, max= 718, avg=681.55, stdev=47.26, samples=20 00:12:37.188 lat (msec) : 20=0.01%, 50=0.58%, 100=87.83%, 250=11.57% 00:12:37.188 cpu : usr=0.38%, sys=3.02%, ctx=1516, majf=0, minf=4098 00:12:37.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:37.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.188 issued rwts: total=6879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.188 job7: (groupid=0, jobs=1): err= 0: pid=66939: Sun Nov 17 09:03:12 2024 00:12:37.188 read: IOPS=683, BW=171MiB/s (179MB/s)(1713MiB/10027msec) 00:12:37.188 slat (usec): min=17, max=29526, avg=1456.59, stdev=3192.80 00:12:37.188 clat (msec): min=20, max=152, avg=92.09, stdev=10.64 
00:12:37.188 lat (msec): min=21, max=152, avg=93.55, stdev=10.68 00:12:37.188 clat percentiles (msec): 00:12:37.188 | 1.00th=[ 69], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 86], 00:12:37.188 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 93], 00:12:37.188 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 103], 95.00th=[ 114], 00:12:37.188 | 99.00th=[ 127], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 146], 00:12:37.188 | 99.99th=[ 153] 00:12:37.188 bw ( KiB/s): min=139264, max=184320, per=10.38%, avg=173747.55, stdev=11148.44, samples=20 00:12:37.188 iops : min= 544, max= 720, avg=678.65, stdev=43.52, samples=20 00:12:37.188 lat (msec) : 50=0.60%, 100=85.24%, 250=14.16% 00:12:37.188 cpu : usr=0.33%, sys=2.28%, ctx=1500, majf=0, minf=4097 00:12:37.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:37.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.188 issued rwts: total=6851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.188 job8: (groupid=0, jobs=1): err= 0: pid=66940: Sun Nov 17 09:03:12 2024 00:12:37.188 read: IOPS=595, BW=149MiB/s (156MB/s)(1502MiB/10092msec) 00:12:37.188 slat (usec): min=17, max=134827, avg=1659.55, stdev=4077.49 00:12:37.188 clat (msec): min=12, max=220, avg=105.72, stdev=32.00 00:12:37.188 lat (msec): min=12, max=270, avg=107.38, stdev=32.48 00:12:37.188 clat percentiles (msec): 00:12:37.188 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 109], 00:12:37.188 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 116], 00:12:37.188 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 125], 95.00th=[ 130], 00:12:37.188 | 99.00th=[ 180], 99.50th=[ 218], 99.90th=[ 220], 99.95th=[ 220], 00:12:37.188 | 99.99th=[ 222] 00:12:37.188 bw ( KiB/s): min=96961, max=440176, per=9.10%, avg=152275.85, stdev=68469.52, samples=20 00:12:37.188 iops : min= 378, max= 1719, avg=594.75, stdev=267.40, samples=20 00:12:37.188 lat (msec) : 20=0.50%, 50=13.67%, 100=0.38%, 250=85.45% 00:12:37.188 cpu : usr=0.35%, sys=1.95%, ctx=1377, majf=0, minf=4097 00:12:37.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:37.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.188 issued rwts: total=6008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.188 job9: (groupid=0, jobs=1): err= 0: pid=66941: Sun Nov 17 09:03:12 2024 00:12:37.188 read: IOPS=594, BW=149MiB/s (156MB/s)(1500MiB/10092msec) 00:12:37.188 slat (usec): min=20, max=26984, avg=1665.40, stdev=3671.43 00:12:37.188 clat (msec): min=16, max=199, avg=105.86, stdev=18.54 00:12:37.188 lat (msec): min=16, max=200, avg=107.53, stdev=18.83 00:12:37.188 clat percentiles (msec): 00:12:37.188 | 1.00th=[ 43], 5.00th=[ 66], 10.00th=[ 80], 20.00th=[ 96], 00:12:37.188 | 30.00th=[ 107], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 113], 00:12:37.188 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 124], 00:12:37.188 | 99.00th=[ 131], 99.50th=[ 159], 99.90th=[ 197], 99.95th=[ 201], 00:12:37.189 | 99.99th=[ 201] 00:12:37.189 bw ( KiB/s): min=136704, max=238626, per=9.07%, avg=151899.10, stdev=24612.53, samples=20 00:12:37.189 iops : min= 534, max= 932, avg=593.25, stdev=96.13, samples=20 00:12:37.189 lat (msec) : 
20=0.10%, 50=1.15%, 100=21.61%, 250=77.14% 00:12:37.189 cpu : usr=0.36%, sys=2.45%, ctx=1354, majf=0, minf=4097 00:12:37.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:37.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.189 issued rwts: total=5998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.189 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.189 job10: (groupid=0, jobs=1): err= 0: pid=66942: Sun Nov 17 09:03:12 2024 00:12:37.189 read: IOPS=593, BW=148MiB/s (156MB/s)(1498MiB/10091msec) 00:12:37.189 slat (usec): min=20, max=24870, avg=1663.90, stdev=3573.00 00:12:37.189 clat (msec): min=18, max=198, avg=105.97, stdev=17.86 00:12:37.189 lat (msec): min=18, max=198, avg=107.63, stdev=18.12 00:12:37.189 clat percentiles (msec): 00:12:37.189 | 1.00th=[ 55], 5.00th=[ 66], 10.00th=[ 81], 20.00th=[ 94], 00:12:37.189 | 30.00th=[ 107], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 113], 00:12:37.189 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 123], 00:12:37.189 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 192], 99.95th=[ 192], 00:12:37.189 | 99.99th=[ 199] 00:12:37.189 bw ( KiB/s): min=137216, max=236032, per=9.07%, avg=151831.00, stdev=23998.66, samples=20 00:12:37.189 iops : min= 536, max= 922, avg=593.05, stdev=93.73, samples=20 00:12:37.189 lat (msec) : 20=0.07%, 50=0.53%, 100=22.35%, 250=77.05% 00:12:37.189 cpu : usr=0.32%, sys=2.68%, ctx=1376, majf=0, minf=4097 00:12:37.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:37.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.189 issued rwts: total=5992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.189 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.189 00:12:37.189 Run status group 0 (all jobs): 00:12:37.189 READ: bw=1635MiB/s (1714MB/s), 132MiB/s-171MiB/s (138MB/s-180MB/s), io=16.1GiB (17.3GB), run=10027-10096msec 00:12:37.189 00:12:37.189 Disk stats (read/write): 00:12:37.189 nvme0n1: ios=10743/0, merge=0/0, ticks=1230642/0, in_queue=1230642, util=97.81% 00:12:37.189 nvme10n1: ios=10504/0, merge=0/0, ticks=1228763/0, in_queue=1228763, util=97.85% 00:12:37.189 nvme1n1: ios=11436/0, merge=0/0, ticks=1230848/0, in_queue=1230848, util=98.06% 00:12:37.189 nvme2n1: ios=13566/0, merge=0/0, ticks=1234749/0, in_queue=1234749, util=98.15% 00:12:37.189 nvme3n1: ios=10635/0, merge=0/0, ticks=1229343/0, in_queue=1229343, util=98.24% 00:12:37.189 nvme4n1: ios=10942/0, merge=0/0, ticks=1232524/0, in_queue=1232524, util=98.40% 00:12:37.189 nvme5n1: ios=13639/0, merge=0/0, ticks=1235761/0, in_queue=1235761, util=98.61% 00:12:37.189 nvme6n1: ios=13581/0, merge=0/0, ticks=1235150/0, in_queue=1235150, util=98.67% 00:12:37.189 nvme7n1: ios=11888/0, merge=0/0, ticks=1230033/0, in_queue=1230033, util=98.88% 00:12:37.189 nvme8n1: ios=11873/0, merge=0/0, ticks=1229484/0, in_queue=1229484, util=99.06% 00:12:37.189 nvme9n1: ios=11859/0, merge=0/0, ticks=1231961/0, in_queue=1231961, util=99.10% 00:12:37.189 09:03:12 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:12:37.189 [global] 00:12:37.189 thread=1 00:12:37.189 invalidate=1 00:12:37.189 rw=randwrite 00:12:37.189 time_based=1 00:12:37.189 runtime=10 00:12:37.189 ioengine=libaio 
00:12:37.189 direct=1 00:12:37.189 bs=262144 00:12:37.189 iodepth=64 00:12:37.189 norandommap=1 00:12:37.189 numjobs=1 00:12:37.189 00:12:37.189 [job0] 00:12:37.189 filename=/dev/nvme0n1 00:12:37.189 [job1] 00:12:37.189 filename=/dev/nvme10n1 00:12:37.189 [job2] 00:12:37.189 filename=/dev/nvme1n1 00:12:37.189 [job3] 00:12:37.189 filename=/dev/nvme2n1 00:12:37.189 [job4] 00:12:37.189 filename=/dev/nvme3n1 00:12:37.189 [job5] 00:12:37.189 filename=/dev/nvme4n1 00:12:37.189 [job6] 00:12:37.189 filename=/dev/nvme5n1 00:12:37.189 [job7] 00:12:37.189 filename=/dev/nvme6n1 00:12:37.189 [job8] 00:12:37.189 filename=/dev/nvme7n1 00:12:37.189 [job9] 00:12:37.189 filename=/dev/nvme8n1 00:12:37.189 [job10] 00:12:37.189 filename=/dev/nvme9n1 00:12:37.189 Could not set queue depth (nvme0n1) 00:12:37.189 Could not set queue depth (nvme10n1) 00:12:37.189 Could not set queue depth (nvme1n1) 00:12:37.189 Could not set queue depth (nvme2n1) 00:12:37.189 Could not set queue depth (nvme3n1) 00:12:37.189 Could not set queue depth (nvme4n1) 00:12:37.189 Could not set queue depth (nvme5n1) 00:12:37.189 Could not set queue depth (nvme6n1) 00:12:37.189 Could not set queue depth (nvme7n1) 00:12:37.189 Could not set queue depth (nvme8n1) 00:12:37.189 Could not set queue depth (nvme9n1) 00:12:37.189 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:37.189 fio-3.35 00:12:37.189 Starting 11 threads 00:12:47.170 00:12:47.170 job0: (groupid=0, jobs=1): err= 0: pid=67137: Sun Nov 17 09:03:23 2024 00:12:47.170 write: IOPS=694, BW=174MiB/s (182MB/s)(1749MiB/10074msec); 0 zone resets 00:12:47.170 slat (usec): min=18, max=58107, avg=1400.43, stdev=2563.16 00:12:47.170 clat (msec): min=9, max=221, avg=90.74, stdev=22.81 00:12:47.170 lat (msec): min=10, max=221, avg=92.14, stdev=23.03 00:12:47.170 clat percentiles (msec): 00:12:47.170 | 1.00th=[ 40], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 84], 00:12:47.170 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 88], 00:12:47.170 | 70.00th=[ 89], 80.00th=[ 89], 90.00th=[ 91], 95.00th=[ 163], 00:12:47.170 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 203], 99.95th=[ 211], 00:12:47.170 | 99.99th=[ 222] 00:12:47.170 bw ( KiB/s): min=82432, max=209920, 
per=12.12%, avg=177377.40, stdev=32340.98, samples=20 00:12:47.170 iops : min= 322, max= 820, avg=692.80, stdev=126.31, samples=20 00:12:47.170 lat (msec) : 10=0.01%, 20=0.37%, 50=0.94%, 100=92.59%, 250=6.08% 00:12:47.170 cpu : usr=1.00%, sys=2.01%, ctx=8443, majf=0, minf=1 00:12:47.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:47.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.170 issued rwts: total=0,6994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.170 job1: (groupid=0, jobs=1): err= 0: pid=67138: Sun Nov 17 09:03:23 2024 00:12:47.170 write: IOPS=392, BW=98.2MiB/s (103MB/s)(999MiB/10164msec); 0 zone resets 00:12:47.170 slat (usec): min=17, max=34457, avg=2498.29, stdev=4354.31 00:12:47.170 clat (msec): min=35, max=316, avg=160.29, stdev=19.10 00:12:47.170 lat (msec): min=35, max=316, avg=162.79, stdev=18.84 00:12:47.170 clat percentiles (msec): 00:12:47.170 | 1.00th=[ 123], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 153], 00:12:47.170 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:47.170 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 171], 95.00th=[ 201], 00:12:47.170 | 99.00th=[ 218], 99.50th=[ 268], 99.90th=[ 309], 99.95th=[ 317], 00:12:47.170 | 99.99th=[ 317] 00:12:47.170 bw ( KiB/s): min=80545, max=104448, per=6.88%, avg=100631.20, stdev=6927.46, samples=20 00:12:47.171 iops : min= 314, max= 408, avg=393.05, stdev=27.15, samples=20 00:12:47.171 lat (msec) : 50=0.13%, 100=0.60%, 250=98.62%, 500=0.65% 00:12:47.171 cpu : usr=0.79%, sys=1.15%, ctx=5024, majf=0, minf=1 00:12:47.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.171 issued rwts: total=0,3994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.171 job2: (groupid=0, jobs=1): err= 0: pid=67145: Sun Nov 17 09:03:23 2024 00:12:47.171 write: IOPS=402, BW=101MiB/s (105MB/s)(1018MiB/10128msec); 0 zone resets 00:12:47.171 slat (usec): min=19, max=49317, avg=2451.61, stdev=4253.64 00:12:47.171 clat (msec): min=55, max=276, avg=156.60, stdev=11.22 00:12:47.171 lat (msec): min=55, max=276, avg=159.06, stdev=10.57 00:12:47.171 clat percentiles (msec): 00:12:47.171 | 1.00th=[ 130], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 150], 00:12:47.171 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:47.171 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:12:47.171 | 99.00th=[ 188], 99.50th=[ 228], 99.90th=[ 266], 99.95th=[ 266], 00:12:47.171 | 99.99th=[ 275] 00:12:47.171 bw ( KiB/s): min=92160, max=106496, per=7.01%, avg=102635.10, stdev=2822.77, samples=20 00:12:47.171 iops : min= 360, max= 416, avg=400.90, stdev=11.02, samples=20 00:12:47.171 lat (msec) : 100=0.59%, 250=99.17%, 500=0.25% 00:12:47.171 cpu : usr=0.64%, sys=1.10%, ctx=5141, majf=0, minf=1 00:12:47.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.171 issued rwts: total=0,4073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.171 
latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.171 job3: (groupid=0, jobs=1): err= 0: pid=67151: Sun Nov 17 09:03:23 2024 00:12:47.171 write: IOPS=404, BW=101MiB/s (106MB/s)(1024MiB/10137msec); 0 zone resets 00:12:47.171 slat (usec): min=17, max=18044, avg=2437.83, stdev=4187.39 00:12:47.171 clat (msec): min=19, max=284, avg=155.86, stdev=14.63 00:12:47.171 lat (msec): min=19, max=285, avg=158.29, stdev=14.24 00:12:47.171 clat percentiles (msec): 00:12:47.171 | 1.00th=[ 94], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:12:47.171 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:47.171 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:12:47.171 | 99.00th=[ 188], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 275], 00:12:47.171 | 99.99th=[ 284] 00:12:47.171 bw ( KiB/s): min=100864, max=106496, per=7.05%, avg=103234.35, stdev=1333.01, samples=20 00:12:47.171 iops : min= 394, max= 416, avg=403.25, stdev= 5.20, samples=20 00:12:47.171 lat (msec) : 20=0.10%, 50=0.39%, 100=0.59%, 250=98.58%, 500=0.34% 00:12:47.171 cpu : usr=0.65%, sys=1.09%, ctx=4396, majf=0, minf=1 00:12:47.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.171 issued rwts: total=0,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.171 job4: (groupid=0, jobs=1): err= 0: pid=67152: Sun Nov 17 09:03:23 2024 00:12:47.171 write: IOPS=393, BW=98.3MiB/s (103MB/s)(1000MiB/10165msec); 0 zone resets 00:12:47.171 slat (usec): min=16, max=53635, avg=2496.70, stdev=4383.40 00:12:47.171 clat (msec): min=16, max=315, avg=160.16, stdev=20.92 00:12:47.171 lat (msec): min=16, max=315, avg=162.65, stdev=20.74 00:12:47.171 clat percentiles (msec): 00:12:47.171 | 1.00th=[ 91], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 153], 00:12:47.171 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:47.171 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 205], 00:12:47.171 | 99.00th=[ 222], 99.50th=[ 268], 99.90th=[ 305], 99.95th=[ 317], 00:12:47.171 | 99.99th=[ 317] 00:12:47.171 bw ( KiB/s): min=81920, max=104448, per=6.88%, avg=100725.75, stdev=6650.09, samples=20 00:12:47.171 iops : min= 320, max= 408, avg=393.45, stdev=25.97, samples=20 00:12:47.171 lat (msec) : 20=0.03%, 50=0.50%, 100=0.60%, 250=98.22%, 500=0.65% 00:12:47.171 cpu : usr=0.72%, sys=1.16%, ctx=4569, majf=0, minf=1 00:12:47.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.171 issued rwts: total=0,3998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.171 job5: (groupid=0, jobs=1): err= 0: pid=67153: Sun Nov 17 09:03:23 2024 00:12:47.171 write: IOPS=769, BW=192MiB/s (202MB/s)(1939MiB/10080msec); 0 zone resets 00:12:47.171 slat (usec): min=18, max=7621, avg=1283.94, stdev=2195.80 00:12:47.171 clat (msec): min=7, max=162, avg=81.85, stdev=12.70 00:12:47.171 lat (msec): min=7, max=162, avg=83.14, stdev=12.71 00:12:47.171 clat percentiles (msec): 00:12:47.171 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 82], 00:12:47.171 | 30.00th=[ 84], 40.00th=[ 86], 
50.00th=[ 87], 60.00th=[ 88], 00:12:47.171 | 70.00th=[ 88], 80.00th=[ 89], 90.00th=[ 90], 95.00th=[ 91], 00:12:47.171 | 99.00th=[ 92], 99.50th=[ 109], 99.90th=[ 153], 99.95th=[ 157], 00:12:47.171 | 99.99th=[ 163] 00:12:47.171 bw ( KiB/s): min=181248, max=290816, per=13.45%, avg=196891.75, stdev=31259.66, samples=20 00:12:47.171 iops : min= 708, max= 1136, avg=769.05, stdev=122.13, samples=20 00:12:47.171 lat (msec) : 10=0.05%, 20=0.15%, 50=0.46%, 100=98.74%, 250=0.59% 00:12:47.171 cpu : usr=1.32%, sys=1.85%, ctx=9848, majf=0, minf=1 00:12:47.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.171 issued rwts: total=0,7757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.171 job6: (groupid=0, jobs=1): err= 0: pid=67158: Sun Nov 17 09:03:23 2024 00:12:47.171 write: IOPS=390, BW=97.7MiB/s (102MB/s)(993MiB/10163msec); 0 zone resets 00:12:47.171 slat (usec): min=16, max=95142, avg=2513.08, stdev=4575.89 00:12:47.171 clat (msec): min=97, max=311, avg=161.13, stdev=17.51 00:12:47.171 lat (msec): min=97, max=311, avg=163.64, stdev=17.12 00:12:47.171 clat percentiles (msec): 00:12:47.171 | 1.00th=[ 146], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 153], 00:12:47.171 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 159], 60.00th=[ 159], 00:12:47.171 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 201], 00:12:47.171 | 99.00th=[ 232], 99.50th=[ 264], 99.90th=[ 300], 99.95th=[ 313], 00:12:47.171 | 99.99th=[ 313] 00:12:47.171 bw ( KiB/s): min=69771, max=104960, per=6.84%, avg=100071.80, stdev=8729.91, samples=20 00:12:47.171 iops : min= 272, max= 410, avg=390.85, stdev=34.19, samples=20 00:12:47.171 lat (msec) : 100=0.10%, 250=99.17%, 500=0.73% 00:12:47.171 cpu : usr=0.68%, sys=1.13%, ctx=1611, majf=0, minf=1 00:12:47.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.171 issued rwts: total=0,3973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.171 job7: (groupid=0, jobs=1): err= 0: pid=67159: Sun Nov 17 09:03:23 2024 00:12:47.171 write: IOPS=391, BW=97.9MiB/s (103MB/s)(995MiB/10161msec); 0 zone resets 00:12:47.171 slat (usec): min=19, max=78991, avg=2506.71, stdev=4482.88 00:12:47.171 clat (msec): min=22, max=315, avg=160.79, stdev=21.10 00:12:47.171 lat (msec): min=22, max=315, avg=163.30, stdev=20.88 00:12:47.171 clat percentiles (msec): 00:12:47.171 | 1.00th=[ 144], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 153], 00:12:47.171 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:47.171 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 178], 95.00th=[ 203], 00:12:47.171 | 99.00th=[ 241], 99.50th=[ 268], 99.90th=[ 305], 99.95th=[ 317], 00:12:47.171 | 99.99th=[ 317] 00:12:47.171 bw ( KiB/s): min=75776, max=104448, per=6.85%, avg=100249.60, stdev=7690.95, samples=20 00:12:47.171 iops : min= 296, max= 408, avg=391.70, stdev=30.09, samples=20 00:12:47.171 lat (msec) : 50=0.50%, 100=0.40%, 250=98.44%, 500=0.65% 00:12:47.171 cpu : usr=0.67%, sys=1.27%, ctx=4885, majf=0, minf=1 00:12:47.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, 
>=64=98.4% 00:12:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.171 issued rwts: total=0,3980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.171 job8: (groupid=0, jobs=1): err= 0: pid=67160: Sun Nov 17 09:03:23 2024 00:12:47.171 write: IOPS=1096, BW=274MiB/s (287MB/s)(2756MiB/10054msec); 0 zone resets 00:12:47.171 slat (usec): min=16, max=7956, avg=901.40, stdev=1500.32 00:12:47.171 clat (msec): min=10, max=107, avg=57.44, stdev= 3.42 00:12:47.171 lat (msec): min=10, max=107, avg=58.34, stdev= 3.30 00:12:47.171 clat percentiles (msec): 00:12:47.171 | 1.00th=[ 54], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 56], 00:12:47.171 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:12:47.171 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 61], 95.00th=[ 61], 00:12:47.171 | 99.00th=[ 63], 99.50th=[ 73], 99.90th=[ 97], 99.95th=[ 104], 00:12:47.171 | 99.99th=[ 108] 00:12:47.171 bw ( KiB/s): min=262156, max=287744, per=19.17%, avg=280596.70, stdev=5337.88, samples=20 00:12:47.171 iops : min= 1024, max= 1124, avg=1096.00, stdev=20.82, samples=20 00:12:47.171 lat (msec) : 20=0.07%, 50=0.22%, 100=99.62%, 250=0.09% 00:12:47.171 cpu : usr=1.76%, sys=3.02%, ctx=15149, majf=0, minf=1 00:12:47.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:47.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.171 issued rwts: total=0,11025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.172 job9: (groupid=0, jobs=1): err= 0: pid=67161: Sun Nov 17 09:03:23 2024 00:12:47.172 write: IOPS=408, BW=102MiB/s (107MB/s)(1034MiB/10137msec); 0 zone resets 00:12:47.172 slat (usec): min=20, max=17130, avg=2394.08, stdev=4166.89 00:12:47.172 clat (msec): min=2, max=294, avg=154.36, stdev=21.15 00:12:47.172 lat (msec): min=4, max=294, avg=156.76, stdev=21.10 00:12:47.172 clat percentiles (msec): 00:12:47.172 | 1.00th=[ 37], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:12:47.172 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:47.172 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:12:47.172 | 99.00th=[ 194], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 284], 00:12:47.172 | 99.99th=[ 296] 00:12:47.172 bw ( KiB/s): min=100352, max=129024, per=7.13%, avg=104304.85, stdev=5990.05, samples=20 00:12:47.172 iops : min= 392, max= 504, avg=407.40, stdev=23.40, samples=20 00:12:47.172 lat (msec) : 4=0.02%, 10=0.27%, 20=0.36%, 50=0.80%, 100=1.14% 00:12:47.172 lat (msec) : 250=96.98%, 500=0.44% 00:12:47.172 cpu : usr=0.79%, sys=1.23%, ctx=4548, majf=0, minf=1 00:12:47.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:47.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.172 issued rwts: total=0,4137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.172 job10: (groupid=0, jobs=1): err= 0: pid=67162: Sun Nov 17 09:03:23 2024 00:12:47.172 write: IOPS=402, BW=101MiB/s (106MB/s)(1021MiB/10136msec); 0 zone resets 00:12:47.172 slat (usec): min=16, max=31238, avg=2442.57, stdev=4210.55 
00:12:47.172 clat (msec): min=12, max=291, avg=156.34, stdev=15.34 00:12:47.172 lat (msec): min=12, max=291, avg=158.79, stdev=14.98 00:12:47.172 clat percentiles (msec): 00:12:47.172 | 1.00th=[ 95], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 150], 00:12:47.172 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:47.172 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 165], 00:12:47.172 | 99.00th=[ 192], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 284], 00:12:47.172 | 99.99th=[ 292] 00:12:47.172 bw ( KiB/s): min=100352, max=106496, per=7.03%, avg=102947.65, stdev=1816.40, samples=20 00:12:47.172 iops : min= 392, max= 416, avg=402.10, stdev= 7.15, samples=20 00:12:47.172 lat (msec) : 20=0.10%, 50=0.39%, 100=0.59%, 250=98.48%, 500=0.44% 00:12:47.172 cpu : usr=0.84%, sys=1.11%, ctx=5371, majf=0, minf=1 00:12:47.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:47.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:47.172 issued rwts: total=0,4084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.172 00:12:47.172 Run status group 0 (all jobs): 00:12:47.172 WRITE: bw=1429MiB/s (1499MB/s), 97.7MiB/s-274MiB/s (102MB/s-287MB/s), io=14.2GiB (15.2GB), run=10054-10165msec 00:12:47.172 00:12:47.172 Disk stats (read/write): 00:12:47.172 nvme0n1: ios=49/13822, merge=0/0, ticks=47/1214565, in_queue=1214612, util=97.72% 00:12:47.172 nvme10n1: ios=49/7847, merge=0/0, ticks=46/1211477, in_queue=1211523, util=97.90% 00:12:47.172 nvme1n1: ios=49/7989, merge=0/0, ticks=31/1209882, in_queue=1209913, util=97.94% 00:12:47.172 nvme2n1: ios=31/8046, merge=0/0, ticks=21/1210976, in_queue=1210997, util=97.94% 00:12:47.172 nvme3n1: ios=32/7854, merge=0/0, ticks=28/1211248, in_queue=1211276, util=98.11% 00:12:47.172 nvme4n1: ios=0/15376, merge=0/0, ticks=0/1216636, in_queue=1216636, util=98.29% 00:12:47.172 nvme5n1: ios=0/7796, merge=0/0, ticks=0/1210887, in_queue=1210887, util=98.23% 00:12:47.172 nvme6n1: ios=0/7825, merge=0/0, ticks=0/1212119, in_queue=1212119, util=98.38% 00:12:47.172 nvme7n1: ios=0/21841, merge=0/0, ticks=0/1213767, in_queue=1213767, util=98.49% 00:12:47.172 nvme8n1: ios=0/8143, merge=0/0, ticks=0/1212531, in_queue=1212531, util=98.84% 00:12:47.172 nvme9n1: ios=0/8033, merge=0/0, ticks=0/1211653, in_queue=1211653, util=98.90% 00:12:47.172 09:03:23 -- target/multiconnection.sh@36 -- # sync 00:12:47.172 09:03:23 -- target/multiconnection.sh@37 -- # seq 1 11 00:12:47.172 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.172 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.172 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:12:47.172 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:12:47.172 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.172 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:47.172 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.172 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.172 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.172 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:12:47.172 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:12:47.172 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:12:47.172 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:12:47.172 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.172 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:47.172 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.172 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.172 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.172 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:12:47.172 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:12:47.172 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:12:47.172 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:12:47.172 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.172 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:47.172 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.172 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.172 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.172 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:12:47.172 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:12:47.172 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:12:47.172 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:12:47.172 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.172 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:47.172 09:03:23 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:47.172 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.172 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.172 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:12:47.172 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:12:47.172 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:12:47.172 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:12:47.172 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.172 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:12:47.172 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.172 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.172 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.172 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:12:47.172 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:12:47.172 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:12:47.172 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.172 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:12:47.172 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.172 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:12:47.172 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.172 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.172 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.172 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:12:47.172 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:12:47.172 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:12:47.172 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.173 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.173 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:12:47.173 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.173 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:12:47.173 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.173 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:12:47.173 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.173 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.173 
09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.173 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.173 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:12:47.173 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:12:47.173 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:12:47.173 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.173 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.173 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:12:47.173 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.173 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:12:47.173 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.173 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:12:47.173 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.173 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.173 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.173 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.173 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:12:47.173 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:12:47.173 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:12:47.173 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.173 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.173 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:12:47.173 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.173 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:12:47.173 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.173 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:12:47.173 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.173 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.173 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.173 09:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.173 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:12:47.173 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:12:47.173 09:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:12:47.173 09:03:23 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.173 09:03:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:12:47.173 09:03:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.173 09:03:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.173 09:03:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:12:47.173 09:03:23 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.173 09:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:12:47.173 09:03:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.173 09:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:47.173 09:03:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.173 09:03:23 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.173 09:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:12:47.432 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:12:47.432 09:03:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:12:47.432 09:03:24 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.432 09:03:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:12:47.432 09:03:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.432 09:03:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.432 09:03:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:12:47.432 09:03:24 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.432 09:03:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:12:47.432 09:03:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.432 09:03:24 -- common/autotest_common.sh@10 -- # set +x 00:12:47.432 09:03:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.432 09:03:24 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:12:47.432 09:03:24 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:47.432 09:03:24 -- target/multiconnection.sh@47 -- # nvmftestfini 00:12:47.432 09:03:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:47.432 09:03:24 -- nvmf/common.sh@116 -- # sync 00:12:47.432 09:03:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:47.432 09:03:24 -- nvmf/common.sh@119 -- # set +e 00:12:47.432 09:03:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:47.432 09:03:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:47.432 rmmod nvme_tcp 00:12:47.432 rmmod nvme_fabrics 00:12:47.432 rmmod nvme_keyring 00:12:47.432 09:03:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:47.432 09:03:24 -- nvmf/common.sh@123 -- # set -e 00:12:47.432 09:03:24 -- nvmf/common.sh@124 -- # return 0 00:12:47.432 09:03:24 -- nvmf/common.sh@477 -- # '[' -n 66466 ']' 00:12:47.432 09:03:24 -- nvmf/common.sh@478 -- # killprocess 66466 00:12:47.432 09:03:24 -- common/autotest_common.sh@936 -- # '[' -z 66466 ']' 00:12:47.432 09:03:24 -- common/autotest_common.sh@940 -- # kill -0 66466 00:12:47.432 09:03:24 -- common/autotest_common.sh@941 -- # uname 00:12:47.432 09:03:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.432 09:03:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66466 00:12:47.432 09:03:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:47.432 killing process with pid 66466 00:12:47.432 09:03:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:47.432 09:03:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66466' 00:12:47.432 09:03:24 -- common/autotest_common.sh@955 -- # kill 66466 00:12:47.432 09:03:24 -- common/autotest_common.sh@960 -- # wait 66466 00:12:47.692 09:03:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:47.692 09:03:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:47.692 09:03:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:47.692 09:03:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.692 09:03:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:47.692 09:03:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.692 09:03:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:47.692 09:03:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.951 09:03:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:47.951 00:12:47.951 real 0m49.118s 00:12:47.951 user 2m40.161s 00:12:47.951 sys 0m35.209s 00:12:47.951 09:03:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:47.951 ************************************ 00:12:47.951 09:03:24 -- common/autotest_common.sh@10 -- # set +x 00:12:47.951 END TEST nvmf_multiconnection 00:12:47.951 ************************************ 00:12:47.951 09:03:24 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:47.951 09:03:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:47.951 09:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:47.951 09:03:24 -- common/autotest_common.sh@10 -- # set +x 00:12:47.951 ************************************ 00:12:47.951 START TEST nvmf_initiator_timeout 00:12:47.951 ************************************ 00:12:47.951 09:03:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:47.951 * Looking for test storage... 00:12:47.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:47.951 09:03:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:47.951 09:03:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:47.951 09:03:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:47.951 09:03:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:47.951 09:03:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:47.951 09:03:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:47.951 09:03:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:47.951 09:03:24 -- scripts/common.sh@335 -- # IFS=.-: 00:12:47.951 09:03:24 -- scripts/common.sh@335 -- # read -ra ver1 00:12:47.951 09:03:24 -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.951 09:03:24 -- scripts/common.sh@336 -- # read -ra ver2 00:12:47.951 09:03:24 -- scripts/common.sh@337 -- # local 'op=<' 00:12:47.951 09:03:24 -- scripts/common.sh@339 -- # ver1_l=2 00:12:47.951 09:03:24 -- scripts/common.sh@340 -- # ver2_l=1 00:12:47.951 09:03:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:47.951 09:03:24 -- scripts/common.sh@343 -- # case "$op" in 00:12:47.951 09:03:24 -- scripts/common.sh@344 -- # : 1 00:12:47.951 09:03:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:47.951 09:03:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.951 09:03:24 -- scripts/common.sh@364 -- # decimal 1 00:12:47.951 09:03:24 -- scripts/common.sh@352 -- # local d=1 00:12:47.951 09:03:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.951 09:03:24 -- scripts/common.sh@354 -- # echo 1 00:12:47.951 09:03:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:47.951 09:03:24 -- scripts/common.sh@365 -- # decimal 2 00:12:47.951 09:03:24 -- scripts/common.sh@352 -- # local d=2 00:12:47.951 09:03:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.951 09:03:24 -- scripts/common.sh@354 -- # echo 2 00:12:47.951 09:03:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:47.951 09:03:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:47.951 09:03:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:47.951 09:03:24 -- scripts/common.sh@367 -- # return 0 00:12:47.951 09:03:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.951 09:03:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:47.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.951 --rc genhtml_branch_coverage=1 00:12:47.951 --rc genhtml_function_coverage=1 00:12:47.951 --rc genhtml_legend=1 00:12:47.951 --rc geninfo_all_blocks=1 00:12:47.951 --rc geninfo_unexecuted_blocks=1 00:12:47.951 00:12:47.951 ' 00:12:47.951 09:03:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:47.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.951 --rc genhtml_branch_coverage=1 00:12:47.951 --rc genhtml_function_coverage=1 00:12:47.951 --rc genhtml_legend=1 00:12:47.951 --rc geninfo_all_blocks=1 00:12:47.951 --rc geninfo_unexecuted_blocks=1 00:12:47.951 00:12:47.951 ' 00:12:47.951 09:03:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:47.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.951 --rc genhtml_branch_coverage=1 00:12:47.951 --rc genhtml_function_coverage=1 00:12:47.951 --rc genhtml_legend=1 00:12:47.951 --rc geninfo_all_blocks=1 00:12:47.951 --rc geninfo_unexecuted_blocks=1 00:12:47.951 00:12:47.951 ' 00:12:47.951 09:03:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:47.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.951 --rc genhtml_branch_coverage=1 00:12:47.951 --rc genhtml_function_coverage=1 00:12:47.952 --rc genhtml_legend=1 00:12:47.952 --rc geninfo_all_blocks=1 00:12:47.952 --rc geninfo_unexecuted_blocks=1 00:12:47.952 00:12:47.952 ' 00:12:47.952 09:03:24 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:47.952 09:03:24 -- nvmf/common.sh@7 -- # uname -s 00:12:47.952 09:03:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.952 09:03:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.952 09:03:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.952 09:03:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.952 09:03:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.952 09:03:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.952 09:03:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.952 09:03:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.952 09:03:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.952 09:03:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.211 09:03:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 
00:12:48.211 09:03:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:12:48.211 09:03:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.211 09:03:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.211 09:03:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.211 09:03:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.211 09:03:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.211 09:03:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.211 09:03:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.211 09:03:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.211 09:03:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.211 09:03:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.211 09:03:24 -- paths/export.sh@5 -- # export PATH 00:12:48.211 09:03:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.211 09:03:24 -- nvmf/common.sh@46 -- # : 0 00:12:48.211 09:03:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:48.211 09:03:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:48.211 09:03:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:48.211 09:03:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.211 09:03:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.211 09:03:24 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:48.211 09:03:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:48.211 09:03:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:48.211 09:03:24 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.211 09:03:24 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.211 09:03:24 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:12:48.211 09:03:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:48.211 09:03:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.211 09:03:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:48.211 09:03:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:48.211 09:03:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:48.211 09:03:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.211 09:03:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.211 09:03:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.211 09:03:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:48.211 09:03:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:48.211 09:03:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:48.211 09:03:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:48.211 09:03:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:48.211 09:03:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:48.211 09:03:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.211 09:03:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.211 09:03:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.211 09:03:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:48.211 09:03:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.211 09:03:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.211 09:03:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.211 09:03:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.211 09:03:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.211 09:03:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.211 09:03:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.212 09:03:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.212 09:03:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:48.212 09:03:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:48.212 Cannot find device "nvmf_tgt_br" 00:12:48.212 09:03:24 -- nvmf/common.sh@154 -- # true 00:12:48.212 09:03:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.212 Cannot find device "nvmf_tgt_br2" 00:12:48.212 09:03:24 -- nvmf/common.sh@155 -- # true 00:12:48.212 09:03:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:48.212 09:03:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:48.212 Cannot find device "nvmf_tgt_br" 00:12:48.212 09:03:24 -- nvmf/common.sh@157 -- # true 00:12:48.212 09:03:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:48.212 Cannot find device "nvmf_tgt_br2" 00:12:48.212 09:03:24 -- nvmf/common.sh@158 -- # true 00:12:48.212 09:03:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:48.212 09:03:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:48.212 09:03:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:12:48.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.212 09:03:25 -- nvmf/common.sh@161 -- # true 00:12:48.212 09:03:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.212 09:03:25 -- nvmf/common.sh@162 -- # true 00:12:48.212 09:03:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:48.212 09:03:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:48.212 09:03:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:48.212 09:03:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:48.212 09:03:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:48.212 09:03:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:48.212 09:03:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:48.212 09:03:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:48.212 09:03:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:48.212 09:03:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:48.212 09:03:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:48.212 09:03:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:48.212 09:03:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:48.212 09:03:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:48.212 09:03:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.471 09:03:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:48.471 09:03:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:48.471 09:03:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:48.471 09:03:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.471 09:03:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:48.471 09:03:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.471 09:03:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.471 09:03:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.471 09:03:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:48.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:12:48.471 00:12:48.471 --- 10.0.0.2 ping statistics --- 00:12:48.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.471 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:48.471 09:03:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:48.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:48.471 00:12:48.471 --- 10.0.0.3 ping statistics --- 00:12:48.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.471 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:48.471 09:03:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:48.471 00:12:48.471 --- 10.0.0.1 ping statistics --- 00:12:48.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.471 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:48.471 09:03:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.471 09:03:25 -- nvmf/common.sh@421 -- # return 0 00:12:48.471 09:03:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:48.471 09:03:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.471 09:03:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:48.471 09:03:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:48.471 09:03:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.471 09:03:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:48.471 09:03:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:48.471 09:03:25 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:12:48.471 09:03:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:48.471 09:03:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.471 09:03:25 -- common/autotest_common.sh@10 -- # set +x 00:12:48.471 09:03:25 -- nvmf/common.sh@469 -- # nvmfpid=67545 00:12:48.471 09:03:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.471 09:03:25 -- nvmf/common.sh@470 -- # waitforlisten 67545 00:12:48.471 09:03:25 -- common/autotest_common.sh@829 -- # '[' -z 67545 ']' 00:12:48.471 09:03:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.471 09:03:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.471 09:03:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.471 09:03:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.471 09:03:25 -- common/autotest_common.sh@10 -- # set +x 00:12:48.471 [2024-11-17 09:03:25.300926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:48.471 [2024-11-17 09:03:25.301022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.730 [2024-11-17 09:03:25.442837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.730 [2024-11-17 09:03:25.513437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:48.730 [2024-11-17 09:03:25.513624] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.730 [2024-11-17 09:03:25.513643] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.730 [2024-11-17 09:03:25.513677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
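The target bring-up and initiator attach performed next in the trace is driven entirely through SPDK's JSON-RPC interface (the test harness wraps it in its rpc_cmd helper). Condensed to its essentials — a sketch that assumes a stock SPDK checkout with scripts/rpc.py on the target host and nvme-cli on the initiator, reusing the same names and addresses the trace shows — the flow is roughly:

    # back the subsystem with a 64 MiB malloc bdev wrapped in a delay bdev
    # (the delay bdev's latencies are what the test later raises to provoke initiator timeouts)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # enable the TCP transport and expose Delay0 through a listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: connect over TCP (the harness additionally passes the --hostnqn/--hostid
    # pair generated earlier), then fio runs against the resulting namespace
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Only the direct scripts/rpc.py invocation is an assumption made for readability; every bdev name, NQN, address, and flag mirrors the rpc_cmd calls recorded below.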
00:12:48.730 [2024-11-17 09:03:25.513783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.730 [2024-11-17 09:03:25.513880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.730 [2024-11-17 09:03:25.513995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.730 [2024-11-17 09:03:25.514005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.666 09:03:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.666 09:03:26 -- common/autotest_common.sh@862 -- # return 0 00:12:49.666 09:03:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:49.666 09:03:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.666 09:03:26 -- common/autotest_common.sh@10 -- # set +x 00:12:49.666 09:03:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:49.666 09:03:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.666 09:03:26 -- common/autotest_common.sh@10 -- # set +x 00:12:49.666 Malloc0 00:12:49.666 09:03:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:12:49.666 09:03:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.666 09:03:26 -- common/autotest_common.sh@10 -- # set +x 00:12:49.666 Delay0 00:12:49.666 09:03:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.666 09:03:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.666 09:03:26 -- common/autotest_common.sh@10 -- # set +x 00:12:49.666 [2024-11-17 09:03:26.396864] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.666 09:03:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.666 09:03:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.666 09:03:26 -- common/autotest_common.sh@10 -- # set +x 00:12:49.666 09:03:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.666 09:03:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.666 09:03:26 -- common/autotest_common.sh@10 -- # set +x 00:12:49.666 09:03:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.666 09:03:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.666 09:03:26 -- common/autotest_common.sh@10 -- # set +x 00:12:49.666 [2024-11-17 09:03:26.425160] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.666 09:03:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.666 09:03:26 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.666 09:03:26 -- common/autotest_common.sh@1187 -- # local i=0 00:12:49.666 09:03:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.666 09:03:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:49.666 09:03:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:52.199 09:03:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:52.199 09:03:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:52.199 09:03:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.199 09:03:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:52.199 09:03:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.199 09:03:28 -- common/autotest_common.sh@1197 -- # return 0 00:12:52.199 09:03:28 -- target/initiator_timeout.sh@35 -- # fio_pid=67612 00:12:52.199 09:03:28 -- target/initiator_timeout.sh@37 -- # sleep 3 00:12:52.199 09:03:28 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:12:52.199 [global] 00:12:52.199 thread=1 00:12:52.199 invalidate=1 00:12:52.199 rw=write 00:12:52.199 time_based=1 00:12:52.199 runtime=60 00:12:52.199 ioengine=libaio 00:12:52.199 direct=1 00:12:52.199 bs=4096 00:12:52.199 iodepth=1 00:12:52.199 norandommap=0 00:12:52.199 numjobs=1 00:12:52.199 00:12:52.199 verify_dump=1 00:12:52.199 verify_backlog=512 00:12:52.199 verify_state_save=0 00:12:52.199 do_verify=1 00:12:52.199 verify=crc32c-intel 00:12:52.199 [job0] 00:12:52.199 filename=/dev/nvme0n1 00:12:52.199 Could not set queue depth (nvme0n1) 00:12:52.199 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:52.199 fio-3.35 00:12:52.199 Starting 1 thread 00:12:54.732 09:03:31 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:12:54.732 09:03:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.732 09:03:31 -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 true 00:12:54.732 09:03:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.732 09:03:31 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:12:54.732 09:03:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.732 09:03:31 -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 true 00:12:54.732 09:03:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.732 09:03:31 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:12:54.732 09:03:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.732 09:03:31 -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 true 00:12:54.732 09:03:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.732 09:03:31 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:12:54.732 09:03:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.732 09:03:31 -- common/autotest_common.sh@10 -- # set +x 00:12:54.732 true 00:12:54.732 09:03:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.732 09:03:31 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:12:58.021 09:03:34 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:12:58.021 09:03:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.021 09:03:34 -- common/autotest_common.sh@10 -- # set +x 00:12:58.021 true 00:12:58.021 09:03:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.021 09:03:34 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:12:58.021 09:03:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.021 09:03:34 -- common/autotest_common.sh@10 -- # set +x 00:12:58.021 true 00:12:58.021 09:03:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.021 09:03:34 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:12:58.021 09:03:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.021 09:03:34 -- common/autotest_common.sh@10 -- # set +x 00:12:58.021 true 00:12:58.021 09:03:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.021 09:03:34 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:12:58.021 09:03:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.021 09:03:34 -- common/autotest_common.sh@10 -- # set +x 00:12:58.021 true 00:12:58.021 09:03:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.021 09:03:34 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:12:58.021 09:03:34 -- target/initiator_timeout.sh@54 -- # wait 67612 00:13:54.297 00:13:54.297 job0: (groupid=0, jobs=1): err= 0: pid=67633: Sun Nov 17 09:04:28 2024 00:13:54.297 read: IOPS=756, BW=3028KiB/s (3100kB/s)(177MiB/60000msec) 00:13:54.297 slat (usec): min=10, max=215, avg=14.92, stdev= 7.14 00:13:54.297 clat (usec): min=48, max=14998, avg=217.29, stdev=103.48 00:13:54.297 lat (usec): min=165, max=15015, avg=232.21, stdev=105.24 00:13:54.297 clat percentiles (usec): 00:13:54.297 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 192], 00:13:54.297 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:13:54.297 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 260], 00:13:54.297 | 99.00th=[ 347], 99.50th=[ 619], 99.90th=[ 1074], 99.95th=[ 1336], 00:13:54.297 | 99.99th=[ 3752] 00:13:54.297 write: IOPS=759, BW=3038KiB/s (3111kB/s)(178MiB/60000msec); 0 zone resets 00:13:54.297 slat (usec): min=13, max=9369, avg=22.96, stdev=57.94 00:13:54.297 clat (usec): min=3, max=40576k, avg=1058.72, stdev=190081.90 00:13:54.297 lat (usec): min=131, max=40576k, avg=1081.68, stdev=190081.91 00:13:54.297 clat percentiles (usec): 00:13:54.297 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 147], 00:13:54.297 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 169], 00:13:54.297 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 210], 00:13:54.297 | 99.00th=[ 265], 99.50th=[ 371], 99.90th=[ 742], 99.95th=[ 922], 00:13:54.297 | 99.99th=[ 3359] 00:13:54.297 bw ( KiB/s): min= 1992, max=12272, per=100.00%, avg=9135.46, stdev=1972.87, samples=39 00:13:54.297 iops : min= 498, max= 3068, avg=2283.82, stdev=493.23, samples=39 00:13:54.297 lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.02%, 250=95.24% 00:13:54.297 lat (usec) : 500=4.24%, 750=0.28%, 1000=0.14% 00:13:54.297 lat (msec) : 2=0.06%, 4=0.02%, 10=0.01%, 20=0.01%, >=2000=0.01% 00:13:54.297 cpu : usr=0.61%, sys=2.23%, ctx=91137, majf=0, minf=5 00:13:54.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:13:54.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.297 issued rwts: total=45414,45568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.297 00:13:54.297 Run status group 0 (all jobs): 00:13:54.297 READ: bw=3028KiB/s (3100kB/s), 3028KiB/s-3028KiB/s (3100kB/s-3100kB/s), io=177MiB (186MB), run=60000-60000msec 00:13:54.297 WRITE: bw=3038KiB/s (3111kB/s), 3038KiB/s-3038KiB/s (3111kB/s-3111kB/s), io=178MiB (187MB), run=60000-60000msec 00:13:54.297 00:13:54.297 Disk stats (read/write): 00:13:54.297 nvme0n1: ios=45328/45470, merge=0/0, ticks=10193/8257, in_queue=18450, util=99.58% 00:13:54.297 09:04:28 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.297 09:04:28 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:54.297 09:04:28 -- common/autotest_common.sh@1208 -- # local i=0 00:13:54.297 09:04:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:54.297 09:04:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.297 09:04:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:54.297 09:04:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.297 09:04:28 -- common/autotest_common.sh@1220 -- # return 0 00:13:54.297 09:04:28 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:13:54.297 nvmf hotplug test: fio successful as expected 00:13:54.297 09:04:28 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:13:54.297 09:04:28 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.297 09:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.297 09:04:28 -- common/autotest_common.sh@10 -- # set +x 00:13:54.297 09:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.297 09:04:28 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:13:54.298 09:04:28 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:13:54.298 09:04:28 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:13:54.298 09:04:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:54.298 09:04:28 -- nvmf/common.sh@116 -- # sync 00:13:54.298 09:04:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:54.298 09:04:28 -- nvmf/common.sh@119 -- # set +e 00:13:54.298 09:04:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:54.298 09:04:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:54.298 rmmod nvme_tcp 00:13:54.298 rmmod nvme_fabrics 00:13:54.298 rmmod nvme_keyring 00:13:54.298 09:04:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:54.298 09:04:29 -- nvmf/common.sh@123 -- # set -e 00:13:54.298 09:04:29 -- nvmf/common.sh@124 -- # return 0 00:13:54.298 09:04:29 -- nvmf/common.sh@477 -- # '[' -n 67545 ']' 00:13:54.298 09:04:29 -- nvmf/common.sh@478 -- # killprocess 67545 00:13:54.298 09:04:29 -- common/autotest_common.sh@936 -- # '[' -z 67545 ']' 00:13:54.298 09:04:29 -- common/autotest_common.sh@940 -- # kill -0 67545 00:13:54.298 09:04:29 -- common/autotest_common.sh@941 -- # uname 00:13:54.298 09:04:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:54.298 09:04:29 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67545 00:13:54.298 09:04:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:54.298 09:04:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:54.298 killing process with pid 67545 00:13:54.298 09:04:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67545' 00:13:54.298 09:04:29 -- common/autotest_common.sh@955 -- # kill 67545 00:13:54.298 09:04:29 -- common/autotest_common.sh@960 -- # wait 67545 00:13:54.298 09:04:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:54.298 09:04:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:54.298 09:04:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:54.298 09:04:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.298 09:04:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:54.298 09:04:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.298 09:04:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.298 09:04:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.298 09:04:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:54.298 ************************************ 00:13:54.298 END TEST nvmf_initiator_timeout 00:13:54.298 ************************************ 00:13:54.298 00:13:54.298 real 1m4.570s 00:13:54.298 user 3m47.448s 00:13:54.298 sys 0m23.476s 00:13:54.298 09:04:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:54.298 09:04:29 -- common/autotest_common.sh@10 -- # set +x 00:13:54.298 09:04:29 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:13:54.298 09:04:29 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:13:54.298 09:04:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:54.298 09:04:29 -- common/autotest_common.sh@10 -- # set +x 00:13:54.298 09:04:29 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:13:54.298 09:04:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.298 09:04:29 -- common/autotest_common.sh@10 -- # set +x 00:13:54.298 09:04:29 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:13:54.298 09:04:29 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:54.298 09:04:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:54.298 09:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.298 09:04:29 -- common/autotest_common.sh@10 -- # set +x 00:13:54.298 ************************************ 00:13:54.298 START TEST nvmf_identify 00:13:54.298 ************************************ 00:13:54.298 09:04:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:54.298 * Looking for test storage... 
00:13:54.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:54.298 09:04:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:54.298 09:04:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:54.298 09:04:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:54.298 09:04:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:54.298 09:04:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:54.298 09:04:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:54.298 09:04:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:54.298 09:04:29 -- scripts/common.sh@335 -- # IFS=.-: 00:13:54.298 09:04:29 -- scripts/common.sh@335 -- # read -ra ver1 00:13:54.298 09:04:29 -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.298 09:04:29 -- scripts/common.sh@336 -- # read -ra ver2 00:13:54.298 09:04:29 -- scripts/common.sh@337 -- # local 'op=<' 00:13:54.298 09:04:29 -- scripts/common.sh@339 -- # ver1_l=2 00:13:54.298 09:04:29 -- scripts/common.sh@340 -- # ver2_l=1 00:13:54.298 09:04:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:54.298 09:04:29 -- scripts/common.sh@343 -- # case "$op" in 00:13:54.298 09:04:29 -- scripts/common.sh@344 -- # : 1 00:13:54.298 09:04:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:54.298 09:04:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.298 09:04:29 -- scripts/common.sh@364 -- # decimal 1 00:13:54.298 09:04:29 -- scripts/common.sh@352 -- # local d=1 00:13:54.298 09:04:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.298 09:04:29 -- scripts/common.sh@354 -- # echo 1 00:13:54.298 09:04:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:54.298 09:04:29 -- scripts/common.sh@365 -- # decimal 2 00:13:54.298 09:04:29 -- scripts/common.sh@352 -- # local d=2 00:13:54.298 09:04:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.298 09:04:29 -- scripts/common.sh@354 -- # echo 2 00:13:54.298 09:04:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:54.298 09:04:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:54.298 09:04:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:54.298 09:04:29 -- scripts/common.sh@367 -- # return 0 00:13:54.298 09:04:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.298 09:04:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:54.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.298 --rc genhtml_branch_coverage=1 00:13:54.298 --rc genhtml_function_coverage=1 00:13:54.298 --rc genhtml_legend=1 00:13:54.298 --rc geninfo_all_blocks=1 00:13:54.298 --rc geninfo_unexecuted_blocks=1 00:13:54.298 00:13:54.298 ' 00:13:54.298 09:04:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:54.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.298 --rc genhtml_branch_coverage=1 00:13:54.298 --rc genhtml_function_coverage=1 00:13:54.298 --rc genhtml_legend=1 00:13:54.298 --rc geninfo_all_blocks=1 00:13:54.298 --rc geninfo_unexecuted_blocks=1 00:13:54.298 00:13:54.298 ' 00:13:54.298 09:04:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:54.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.298 --rc genhtml_branch_coverage=1 00:13:54.298 --rc genhtml_function_coverage=1 00:13:54.298 --rc genhtml_legend=1 00:13:54.298 --rc geninfo_all_blocks=1 00:13:54.298 --rc geninfo_unexecuted_blocks=1 00:13:54.298 00:13:54.298 ' 00:13:54.298 
09:04:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:54.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.298 --rc genhtml_branch_coverage=1 00:13:54.298 --rc genhtml_function_coverage=1 00:13:54.298 --rc genhtml_legend=1 00:13:54.298 --rc geninfo_all_blocks=1 00:13:54.298 --rc geninfo_unexecuted_blocks=1 00:13:54.298 00:13:54.298 ' 00:13:54.298 09:04:29 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:54.298 09:04:29 -- nvmf/common.sh@7 -- # uname -s 00:13:54.298 09:04:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.298 09:04:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.298 09:04:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.298 09:04:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.298 09:04:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.298 09:04:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.298 09:04:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.298 09:04:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.298 09:04:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.298 09:04:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.298 09:04:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:13:54.298 09:04:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:13:54.298 09:04:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.298 09:04:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.298 09:04:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:54.298 09:04:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.298 09:04:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.298 09:04:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.298 09:04:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.298 09:04:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.298 09:04:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.298 09:04:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.298 09:04:29 -- paths/export.sh@5 -- # export PATH 00:13:54.299 09:04:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.299 09:04:29 -- nvmf/common.sh@46 -- # : 0 00:13:54.299 09:04:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:54.299 09:04:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:54.299 09:04:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:54.299 09:04:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.299 09:04:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.299 09:04:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:54.299 09:04:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:54.299 09:04:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:54.299 09:04:29 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.299 09:04:29 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.299 09:04:29 -- host/identify.sh@14 -- # nvmftestinit 00:13:54.299 09:04:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:54.299 09:04:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.299 09:04:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:54.299 09:04:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:54.299 09:04:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:54.299 09:04:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.299 09:04:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.299 09:04:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.299 09:04:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:54.299 09:04:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:54.299 09:04:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:54.299 09:04:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:54.299 09:04:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:54.299 09:04:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:54.299 09:04:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.299 09:04:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.299 09:04:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:54.299 09:04:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:54.299 09:04:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:54.299 09:04:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:54.299 09:04:29 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:54.299 09:04:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.299 09:04:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:54.299 09:04:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:54.299 09:04:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:54.299 09:04:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:54.299 09:04:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:54.299 09:04:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:54.299 Cannot find device "nvmf_tgt_br" 00:13:54.299 09:04:29 -- nvmf/common.sh@154 -- # true 00:13:54.299 09:04:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:54.299 Cannot find device "nvmf_tgt_br2" 00:13:54.299 09:04:29 -- nvmf/common.sh@155 -- # true 00:13:54.299 09:04:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:54.299 09:04:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:54.299 Cannot find device "nvmf_tgt_br" 00:13:54.299 09:04:29 -- nvmf/common.sh@157 -- # true 00:13:54.299 09:04:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:54.299 Cannot find device "nvmf_tgt_br2" 00:13:54.299 09:04:29 -- nvmf/common.sh@158 -- # true 00:13:54.299 09:04:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:54.299 09:04:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:54.299 09:04:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.299 09:04:29 -- nvmf/common.sh@161 -- # true 00:13:54.299 09:04:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.299 09:04:29 -- nvmf/common.sh@162 -- # true 00:13:54.299 09:04:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:54.299 09:04:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:54.299 09:04:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:54.299 09:04:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:54.299 09:04:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:54.299 09:04:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:54.299 09:04:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:54.299 09:04:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:54.299 09:04:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:54.299 09:04:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:54.299 09:04:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:54.299 09:04:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:54.299 09:04:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:54.299 09:04:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:54.299 09:04:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:54.299 09:04:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:13:54.299 09:04:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:54.299 09:04:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:54.299 09:04:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:54.299 09:04:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:54.299 09:04:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:54.299 09:04:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:54.299 09:04:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:54.299 09:04:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:54.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:54.299 00:13:54.299 --- 10.0.0.2 ping statistics --- 00:13:54.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.299 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:54.299 09:04:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:54.299 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:54.299 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:13:54.299 00:13:54.299 --- 10.0.0.3 ping statistics --- 00:13:54.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.299 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:54.299 09:04:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:54.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:54.299 00:13:54.299 --- 10.0.0.1 ping statistics --- 00:13:54.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.299 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:54.299 09:04:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.299 09:04:29 -- nvmf/common.sh@421 -- # return 0 00:13:54.299 09:04:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:54.299 09:04:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.299 09:04:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:54.299 09:04:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:54.299 09:04:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.299 09:04:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:54.299 09:04:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:54.299 09:04:29 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:54.299 09:04:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.299 09:04:29 -- common/autotest_common.sh@10 -- # set +x 00:13:54.299 09:04:29 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:54.299 09:04:29 -- host/identify.sh@19 -- # nvmfpid=68501 00:13:54.299 09:04:29 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:54.299 09:04:29 -- host/identify.sh@23 -- # waitforlisten 68501 00:13:54.299 09:04:29 -- common/autotest_common.sh@829 -- # '[' -z 68501 ']' 00:13:54.299 09:04:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.299 09:04:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
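
For reference, the nvmf_veth_init sequence traced above amounts to the following topology setup. This is a condensed sketch of the commands already shown in the trace, using the same interface, namespace, and address names; it assumes it is run as root on the initiator host and is not an additional step in the recorded run:

    ip netns add nvmf_tgt_ns_spdk                                 # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge both pairs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
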
00:13:54.299 09:04:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.299 09:04:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.299 09:04:29 -- common/autotest_common.sh@10 -- # set +x 00:13:54.299 [2024-11-17 09:04:30.030286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:54.299 [2024-11-17 09:04:30.030378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.299 [2024-11-17 09:04:30.161792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.299 [2024-11-17 09:04:30.216649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:54.299 [2024-11-17 09:04:30.216800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.299 [2024-11-17 09:04:30.216812] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.299 [2024-11-17 09:04:30.216820] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.299 [2024-11-17 09:04:30.216976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.299 [2024-11-17 09:04:30.217117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.299 [2024-11-17 09:04:30.217150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.299 [2024-11-17 09:04:30.217152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.299 09:04:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.299 09:04:31 -- common/autotest_common.sh@862 -- # return 0 00:13:54.299 09:04:31 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.299 09:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.300 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.300 [2024-11-17 09:04:31.051218] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.300 09:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.300 09:04:31 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:54.300 09:04:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:54.300 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.300 09:04:31 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:54.300 09:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.300 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.300 Malloc0 00:13:54.300 09:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.300 09:04:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:54.300 09:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.300 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.300 09:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.300 09:04:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:54.300 09:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.300 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.300 
09:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.300 09:04:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.300 09:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.300 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.300 [2024-11-17 09:04:31.144858] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.300 09:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.300 09:04:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.300 09:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.300 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.300 09:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.300 09:04:31 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:54.300 09:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.300 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.300 [2024-11-17 09:04:31.160681] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:54.300 [ 00:13:54.300 { 00:13:54.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:54.300 "subtype": "Discovery", 00:13:54.300 "listen_addresses": [ 00:13:54.300 { 00:13:54.300 "transport": "TCP", 00:13:54.300 "trtype": "TCP", 00:13:54.300 "adrfam": "IPv4", 00:13:54.300 "traddr": "10.0.0.2", 00:13:54.300 "trsvcid": "4420" 00:13:54.300 } 00:13:54.300 ], 00:13:54.300 "allow_any_host": true, 00:13:54.300 "hosts": [] 00:13:54.300 }, 00:13:54.300 { 00:13:54.300 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.300 "subtype": "NVMe", 00:13:54.300 "listen_addresses": [ 00:13:54.300 { 00:13:54.300 "transport": "TCP", 00:13:54.300 "trtype": "TCP", 00:13:54.300 "adrfam": "IPv4", 00:13:54.300 "traddr": "10.0.0.2", 00:13:54.300 "trsvcid": "4420" 00:13:54.300 } 00:13:54.300 ], 00:13:54.300 "allow_any_host": true, 00:13:54.300 "hosts": [], 00:13:54.300 "serial_number": "SPDK00000000000001", 00:13:54.300 "model_number": "SPDK bdev Controller", 00:13:54.300 "max_namespaces": 32, 00:13:54.300 "min_cntlid": 1, 00:13:54.300 "max_cntlid": 65519, 00:13:54.300 "namespaces": [ 00:13:54.300 { 00:13:54.300 "nsid": 1, 00:13:54.300 "bdev_name": "Malloc0", 00:13:54.300 "name": "Malloc0", 00:13:54.300 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:54.300 "eui64": "ABCDEF0123456789", 00:13:54.300 "uuid": "b5272449-8cd1-496d-94e4-0e11132a47c6" 00:13:54.300 } 00:13:54.300 ] 00:13:54.300 } 00:13:54.300 ] 00:13:54.300 09:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.300 09:04:31 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:54.300 [2024-11-17 09:04:31.198171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
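
The identify.sh target setup traced above corresponds to the RPC calls below. This is a condensed sketch assuming rpc_cmd forwards each command to scripts/rpc.py on the default /var/tmp/spdk.sock; the transport, bdev, subsystem, namespace, and listener parameters are exactly the ones shown in the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems        # dumps the JSON shown in the trace above
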
00:13:54.300 [2024-11-17 09:04:31.198226] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68536 ] 00:13:54.561 [2024-11-17 09:04:31.337257] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:54.561 [2024-11-17 09:04:31.337340] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:54.561 [2024-11-17 09:04:31.337348] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:54.561 [2024-11-17 09:04:31.337361] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:54.561 [2024-11-17 09:04:31.337375] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:54.561 [2024-11-17 09:04:31.337509] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:54.561 [2024-11-17 09:04:31.337594] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x520d30 0 00:13:54.561 [2024-11-17 09:04:31.341758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:54.561 [2024-11-17 09:04:31.341785] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:54.561 [2024-11-17 09:04:31.341808] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:54.561 [2024-11-17 09:04:31.341813] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:54.561 [2024-11-17 09:04:31.341861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.561 [2024-11-17 09:04:31.341870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.561 [2024-11-17 09:04:31.341875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.561 [2024-11-17 09:04:31.341890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:54.561 [2024-11-17 09:04:31.341924] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.561 [2024-11-17 09:04:31.348648] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.561 [2024-11-17 09:04:31.348670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.561 [2024-11-17 09:04:31.348692] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.561 [2024-11-17 09:04:31.348697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.561 [2024-11-17 09:04:31.348710] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:54.561 [2024-11-17 09:04:31.348719] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:54.562 [2024-11-17 09:04:31.348725] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:54.562 [2024-11-17 09:04:31.348742] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.348748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.348752] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.562 [2024-11-17 09:04:31.348762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.562 [2024-11-17 09:04:31.348791] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.562 [2024-11-17 09:04:31.348844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.562 [2024-11-17 09:04:31.348851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.562 [2024-11-17 09:04:31.348855] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.348859] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.562 [2024-11-17 09:04:31.348865] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:54.562 [2024-11-17 09:04:31.348873] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:54.562 [2024-11-17 09:04:31.348881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.348885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.348890] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.562 [2024-11-17 09:04:31.348897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.562 [2024-11-17 09:04:31.348916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.562 [2024-11-17 09:04:31.349003] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.562 [2024-11-17 09:04:31.349010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.562 [2024-11-17 09:04:31.349014] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.562 [2024-11-17 09:04:31.349026] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:54.562 [2024-11-17 09:04:31.349035] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:54.562 [2024-11-17 09:04:31.349043] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.562 [2024-11-17 09:04:31.349060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.562 [2024-11-17 09:04:31.349079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.562 [2024-11-17 09:04:31.349145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.562 [2024-11-17 09:04:31.349153] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:13:54.562 [2024-11-17 09:04:31.349157] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349161] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.562 [2024-11-17 09:04:31.349168] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:54.562 [2024-11-17 09:04:31.349180] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349189] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.562 [2024-11-17 09:04:31.349198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.562 [2024-11-17 09:04:31.349216] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.562 [2024-11-17 09:04:31.349325] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.562 [2024-11-17 09:04:31.349332] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.562 [2024-11-17 09:04:31.349336] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349341] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.562 [2024-11-17 09:04:31.349346] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:54.562 [2024-11-17 09:04:31.349352] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:54.562 [2024-11-17 09:04:31.349361] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:54.562 [2024-11-17 09:04:31.349468] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:54.562 [2024-11-17 09:04:31.349474] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:54.562 [2024-11-17 09:04:31.349484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349489] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349494] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.562 [2024-11-17 09:04:31.349502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.562 [2024-11-17 09:04:31.349522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.562 [2024-11-17 09:04:31.349579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.562 [2024-11-17 09:04:31.349587] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.562 [2024-11-17 09:04:31.349591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349596] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.562 [2024-11-17 09:04:31.349602] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:54.562 [2024-11-17 09:04:31.349613] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349618] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349623] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.562 [2024-11-17 09:04:31.349631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.562 [2024-11-17 09:04:31.349650] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.562 [2024-11-17 09:04:31.349730] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.562 [2024-11-17 09:04:31.349741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.562 [2024-11-17 09:04:31.349745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.562 [2024-11-17 09:04:31.349755] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:54.562 [2024-11-17 09:04:31.349762] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:54.562 [2024-11-17 09:04:31.349771] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:54.562 [2024-11-17 09:04:31.349788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:54.562 [2024-11-17 09:04:31.349801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349807] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349812] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.562 [2024-11-17 09:04:31.349821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.562 [2024-11-17 09:04:31.349846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.562 [2024-11-17 09:04:31.349952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.562 [2024-11-17 09:04:31.349961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.562 [2024-11-17 09:04:31.349965] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349970] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x520d30): datao=0, datal=4096, cccid=0 00:13:54.562 [2024-11-17 09:04:31.349976] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57ef30) on tqpair(0x520d30): expected_datao=0, payload_size=4096 00:13:54.562 [2024-11-17 09:04:31.349986] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.349991] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.350000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.562 [2024-11-17 09:04:31.350007] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.562 [2024-11-17 09:04:31.350011] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.562 [2024-11-17 09:04:31.350016] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.562 [2024-11-17 09:04:31.350026] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:54.562 [2024-11-17 09:04:31.350032] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:54.562 [2024-11-17 09:04:31.350037] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:54.562 [2024-11-17 09:04:31.350043] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:54.562 [2024-11-17 09:04:31.350048] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:54.563 [2024-11-17 09:04:31.350055] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:54.563 [2024-11-17 09:04:31.350069] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:54.563 [2024-11-17 09:04:31.350078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350087] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:54.563 [2024-11-17 09:04:31.350118] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.563 [2024-11-17 09:04:31.350192] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.563 [2024-11-17 09:04:31.350200] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.563 [2024-11-17 09:04:31.350204] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350208] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57ef30) on tqpair=0x520d30 00:13:54.563 [2024-11-17 09:04:31.350217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350225] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.563 [2024-11-17 09:04:31.350240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350248] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.563 [2024-11-17 09:04:31.350262] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350266] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350270] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.563 [2024-11-17 09:04:31.350283] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350287] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350291] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.563 [2024-11-17 09:04:31.350304] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:54.563 [2024-11-17 09:04:31.350317] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:54.563 [2024-11-17 09:04:31.350326] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350330] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.563 [2024-11-17 09:04:31.350363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57ef30, cid 0, qid 0 00:13:54.563 [2024-11-17 09:04:31.350371] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f090, cid 1, qid 0 00:13:54.563 [2024-11-17 09:04:31.350377] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f1f0, cid 2, qid 0 00:13:54.563 [2024-11-17 09:04:31.350382] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.563 [2024-11-17 09:04:31.350387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f4b0, cid 4, qid 0 00:13:54.563 [2024-11-17 09:04:31.350481] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.563 [2024-11-17 09:04:31.350488] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.563 [2024-11-17 09:04:31.350492] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350497] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f4b0) on tqpair=0x520d30 00:13:54.563 
[2024-11-17 09:04:31.350503] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:54.563 [2024-11-17 09:04:31.350509] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:54.563 [2024-11-17 09:04:31.350521] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.563 [2024-11-17 09:04:31.350558] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f4b0, cid 4, qid 0 00:13:54.563 [2024-11-17 09:04:31.350632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.563 [2024-11-17 09:04:31.350643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.563 [2024-11-17 09:04:31.350648] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350652] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x520d30): datao=0, datal=4096, cccid=4 00:13:54.563 [2024-11-17 09:04:31.350657] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57f4b0) on tqpair(0x520d30): expected_datao=0, payload_size=4096 00:13:54.563 [2024-11-17 09:04:31.350666] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350670] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.563 [2024-11-17 09:04:31.350686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.563 [2024-11-17 09:04:31.350690] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350694] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f4b0) on tqpair=0x520d30 00:13:54.563 [2024-11-17 09:04:31.350708] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:54.563 [2024-11-17 09:04:31.350738] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350744] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350749] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.563 [2024-11-17 09:04:31.350766] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350771] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350775] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x520d30) 00:13:54.563 [2024-11-17 09:04:31.350782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:13:54.563 [2024-11-17 09:04:31.350810] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f4b0, cid 4, qid 0 00:13:54.563 [2024-11-17 09:04:31.350818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f610, cid 5, qid 0 00:13:54.563 [2024-11-17 09:04:31.350922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.563 [2024-11-17 09:04:31.350929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.563 [2024-11-17 09:04:31.350933] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350937] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x520d30): datao=0, datal=1024, cccid=4 00:13:54.563 [2024-11-17 09:04:31.350942] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57f4b0) on tqpair(0x520d30): expected_datao=0, payload_size=1024 00:13:54.563 [2024-11-17 09:04:31.350950] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350955] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.563 [2024-11-17 09:04:31.350961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.563 [2024-11-17 09:04:31.350967] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.563 [2024-11-17 09:04:31.350971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.350976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f610) on tqpair=0x520d30 00:13:54.564 [2024-11-17 09:04:31.350994] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.564 [2024-11-17 09:04:31.351003] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.564 [2024-11-17 09:04:31.351007] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351011] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f4b0) on tqpair=0x520d30 00:13:54.564 [2024-11-17 09:04:31.351028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351039] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x520d30) 00:13:54.564 [2024-11-17 09:04:31.351047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.564 [2024-11-17 09:04:31.351073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f4b0, cid 4, qid 0 00:13:54.564 [2024-11-17 09:04:31.351139] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.564 [2024-11-17 09:04:31.351147] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.564 [2024-11-17 09:04:31.351151] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351155] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x520d30): datao=0, datal=3072, cccid=4 00:13:54.564 [2024-11-17 09:04:31.351161] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57f4b0) on tqpair(0x520d30): expected_datao=0, payload_size=3072 00:13:54.564 [2024-11-17 09:04:31.351169] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 
09:04:31.351173] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.564 [2024-11-17 09:04:31.351189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.564 [2024-11-17 09:04:31.351192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351197] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f4b0) on tqpair=0x520d30 00:13:54.564 [2024-11-17 09:04:31.351208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x520d30) 00:13:54.564 [2024-11-17 09:04:31.351225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.564 [2024-11-17 09:04:31.351249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f4b0, cid 4, qid 0 00:13:54.564 [2024-11-17 09:04:31.351316] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.564 [2024-11-17 09:04:31.351323] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.564 [2024-11-17 09:04:31.351327] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351332] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x520d30): datao=0, datal=8, cccid=4 00:13:54.564 [2024-11-17 09:04:31.351337] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x57f4b0) on tqpair(0x520d30): expected_datao=0, payload_size=8 00:13:54.564 [2024-11-17 09:04:31.351345] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351349] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351364] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.564 [2024-11-17 09:04:31.351372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.564 [2024-11-17 09:04:31.351376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.564 [2024-11-17 09:04:31.351380] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f4b0) on tqpair=0x520d30 00:13:54.564 ===================================================== 00:13:54.564 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:54.564 ===================================================== 00:13:54.564 Controller Capabilities/Features 00:13:54.564 ================================ 00:13:54.564 Vendor ID: 0000 00:13:54.564 Subsystem Vendor ID: 0000 00:13:54.564 Serial Number: .................... 00:13:54.564 Model Number: ........................................ 
00:13:54.564 Firmware Version: 24.01.1 00:13:54.564 Recommended Arb Burst: 0 00:13:54.564 IEEE OUI Identifier: 00 00 00 00:13:54.564 Multi-path I/O 00:13:54.564 May have multiple subsystem ports: No 00:13:54.564 May have multiple controllers: No 00:13:54.564 Associated with SR-IOV VF: No 00:13:54.564 Max Data Transfer Size: 131072 00:13:54.564 Max Number of Namespaces: 0 00:13:54.564 Max Number of I/O Queues: 1024 00:13:54.564 NVMe Specification Version (VS): 1.3 00:13:54.564 NVMe Specification Version (Identify): 1.3 00:13:54.564 Maximum Queue Entries: 128 00:13:54.564 Contiguous Queues Required: Yes 00:13:54.564 Arbitration Mechanisms Supported 00:13:54.564 Weighted Round Robin: Not Supported 00:13:54.564 Vendor Specific: Not Supported 00:13:54.564 Reset Timeout: 15000 ms 00:13:54.564 Doorbell Stride: 4 bytes 00:13:54.564 NVM Subsystem Reset: Not Supported 00:13:54.564 Command Sets Supported 00:13:54.564 NVM Command Set: Supported 00:13:54.564 Boot Partition: Not Supported 00:13:54.564 Memory Page Size Minimum: 4096 bytes 00:13:54.564 Memory Page Size Maximum: 4096 bytes 00:13:54.564 Persistent Memory Region: Not Supported 00:13:54.564 Optional Asynchronous Events Supported 00:13:54.564 Namespace Attribute Notices: Not Supported 00:13:54.564 Firmware Activation Notices: Not Supported 00:13:54.564 ANA Change Notices: Not Supported 00:13:54.564 PLE Aggregate Log Change Notices: Not Supported 00:13:54.564 LBA Status Info Alert Notices: Not Supported 00:13:54.564 EGE Aggregate Log Change Notices: Not Supported 00:13:54.564 Normal NVM Subsystem Shutdown event: Not Supported 00:13:54.564 Zone Descriptor Change Notices: Not Supported 00:13:54.564 Discovery Log Change Notices: Supported 00:13:54.564 Controller Attributes 00:13:54.564 128-bit Host Identifier: Not Supported 00:13:54.564 Non-Operational Permissive Mode: Not Supported 00:13:54.564 NVM Sets: Not Supported 00:13:54.564 Read Recovery Levels: Not Supported 00:13:54.564 Endurance Groups: Not Supported 00:13:54.564 Predictable Latency Mode: Not Supported 00:13:54.564 Traffic Based Keep ALive: Not Supported 00:13:54.564 Namespace Granularity: Not Supported 00:13:54.564 SQ Associations: Not Supported 00:13:54.564 UUID List: Not Supported 00:13:54.564 Multi-Domain Subsystem: Not Supported 00:13:54.564 Fixed Capacity Management: Not Supported 00:13:54.564 Variable Capacity Management: Not Supported 00:13:54.564 Delete Endurance Group: Not Supported 00:13:54.564 Delete NVM Set: Not Supported 00:13:54.564 Extended LBA Formats Supported: Not Supported 00:13:54.564 Flexible Data Placement Supported: Not Supported 00:13:54.564 00:13:54.564 Controller Memory Buffer Support 00:13:54.564 ================================ 00:13:54.564 Supported: No 00:13:54.564 00:13:54.564 Persistent Memory Region Support 00:13:54.564 ================================ 00:13:54.564 Supported: No 00:13:54.564 00:13:54.564 Admin Command Set Attributes 00:13:54.564 ============================ 00:13:54.564 Security Send/Receive: Not Supported 00:13:54.564 Format NVM: Not Supported 00:13:54.564 Firmware Activate/Download: Not Supported 00:13:54.565 Namespace Management: Not Supported 00:13:54.565 Device Self-Test: Not Supported 00:13:54.565 Directives: Not Supported 00:13:54.565 NVMe-MI: Not Supported 00:13:54.565 Virtualization Management: Not Supported 00:13:54.565 Doorbell Buffer Config: Not Supported 00:13:54.565 Get LBA Status Capability: Not Supported 00:13:54.565 Command & Feature Lockdown Capability: Not Supported 00:13:54.565 Abort Command Limit: 1 00:13:54.565 
Async Event Request Limit: 4 00:13:54.565 Number of Firmware Slots: N/A 00:13:54.565 Firmware Slot 1 Read-Only: N/A 00:13:54.565 Firmware Activation Without Reset: N/A 00:13:54.565 Multiple Update Detection Support: N/A 00:13:54.565 Firmware Update Granularity: No Information Provided 00:13:54.565 Per-Namespace SMART Log: No 00:13:54.565 Asymmetric Namespace Access Log Page: Not Supported 00:13:54.565 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:54.565 Command Effects Log Page: Not Supported 00:13:54.565 Get Log Page Extended Data: Supported 00:13:54.565 Telemetry Log Pages: Not Supported 00:13:54.565 Persistent Event Log Pages: Not Supported 00:13:54.565 Supported Log Pages Log Page: May Support 00:13:54.565 Commands Supported & Effects Log Page: Not Supported 00:13:54.565 Feature Identifiers & Effects Log Page:May Support 00:13:54.565 NVMe-MI Commands & Effects Log Page: May Support 00:13:54.565 Data Area 4 for Telemetry Log: Not Supported 00:13:54.565 Error Log Page Entries Supported: 128 00:13:54.565 Keep Alive: Not Supported 00:13:54.565 00:13:54.565 NVM Command Set Attributes 00:13:54.565 ========================== 00:13:54.565 Submission Queue Entry Size 00:13:54.565 Max: 1 00:13:54.565 Min: 1 00:13:54.565 Completion Queue Entry Size 00:13:54.565 Max: 1 00:13:54.565 Min: 1 00:13:54.565 Number of Namespaces: 0 00:13:54.565 Compare Command: Not Supported 00:13:54.565 Write Uncorrectable Command: Not Supported 00:13:54.565 Dataset Management Command: Not Supported 00:13:54.565 Write Zeroes Command: Not Supported 00:13:54.565 Set Features Save Field: Not Supported 00:13:54.565 Reservations: Not Supported 00:13:54.565 Timestamp: Not Supported 00:13:54.565 Copy: Not Supported 00:13:54.565 Volatile Write Cache: Not Present 00:13:54.565 Atomic Write Unit (Normal): 1 00:13:54.565 Atomic Write Unit (PFail): 1 00:13:54.565 Atomic Compare & Write Unit: 1 00:13:54.565 Fused Compare & Write: Supported 00:13:54.565 Scatter-Gather List 00:13:54.565 SGL Command Set: Supported 00:13:54.565 SGL Keyed: Supported 00:13:54.565 SGL Bit Bucket Descriptor: Not Supported 00:13:54.565 SGL Metadata Pointer: Not Supported 00:13:54.565 Oversized SGL: Not Supported 00:13:54.565 SGL Metadata Address: Not Supported 00:13:54.565 SGL Offset: Supported 00:13:54.565 Transport SGL Data Block: Not Supported 00:13:54.565 Replay Protected Memory Block: Not Supported 00:13:54.565 00:13:54.565 Firmware Slot Information 00:13:54.565 ========================= 00:13:54.565 Active slot: 0 00:13:54.565 00:13:54.565 00:13:54.565 Error Log 00:13:54.565 ========= 00:13:54.565 00:13:54.565 Active Namespaces 00:13:54.565 ================= 00:13:54.565 Discovery Log Page 00:13:54.565 ================== 00:13:54.565 Generation Counter: 2 00:13:54.565 Number of Records: 2 00:13:54.565 Record Format: 0 00:13:54.565 00:13:54.565 Discovery Log Entry 0 00:13:54.565 ---------------------- 00:13:54.565 Transport Type: 3 (TCP) 00:13:54.565 Address Family: 1 (IPv4) 00:13:54.565 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:54.565 Entry Flags: 00:13:54.565 Duplicate Returned Information: 1 00:13:54.565 Explicit Persistent Connection Support for Discovery: 1 00:13:54.565 Transport Requirements: 00:13:54.565 Secure Channel: Not Required 00:13:54.565 Port ID: 0 (0x0000) 00:13:54.565 Controller ID: 65535 (0xffff) 00:13:54.565 Admin Max SQ Size: 128 00:13:54.565 Transport Service Identifier: 4420 00:13:54.565 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:54.565 Transport Address: 10.0.0.2 00:13:54.565 
Discovery Log Entry 1 00:13:54.565 ---------------------- 00:13:54.565 Transport Type: 3 (TCP) 00:13:54.565 Address Family: 1 (IPv4) 00:13:54.565 Subsystem Type: 2 (NVM Subsystem) 00:13:54.565 Entry Flags: 00:13:54.565 Duplicate Returned Information: 0 00:13:54.565 Explicit Persistent Connection Support for Discovery: 0 00:13:54.565 Transport Requirements: 00:13:54.565 Secure Channel: Not Required 00:13:54.565 Port ID: 0 (0x0000) 00:13:54.565 Controller ID: 65535 (0xffff) 00:13:54.565 Admin Max SQ Size: 128 00:13:54.565 Transport Service Identifier: 4420 00:13:54.565 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:54.565 Transport Address: 10.0.0.2 [2024-11-17 09:04:31.351478] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:54.565 [2024-11-17 09:04:31.351495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.565 [2024-11-17 09:04:31.351503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.565 [2024-11-17 09:04:31.351510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.565 [2024-11-17 09:04:31.351517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.565 [2024-11-17 09:04:31.351526] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.565 [2024-11-17 09:04:31.351531] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.565 [2024-11-17 09:04:31.351536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.565 [2024-11-17 09:04:31.351545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.565 [2024-11-17 09:04:31.351568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.565 [2024-11-17 09:04:31.351640] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.565 [2024-11-17 09:04:31.351655] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.565 [2024-11-17 09:04:31.351660] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.565 [2024-11-17 09:04:31.351665] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.565 [2024-11-17 09:04:31.351675] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.565 [2024-11-17 09:04:31.351680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.565 [2024-11-17 09:04:31.351684] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.565 [2024-11-17 09:04:31.351693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.565 [2024-11-17 09:04:31.351720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.565 [2024-11-17 09:04:31.351782] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.565 [2024-11-17 09:04:31.351789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.565 [2024-11-17 09:04:31.351794] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.565 [2024-11-17 09:04:31.351798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.565 [2024-11-17 09:04:31.351805] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:54.565 [2024-11-17 09:04:31.351810] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:54.565 [2024-11-17 09:04:31.351821] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.565 [2024-11-17 09:04:31.351826] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.565 [2024-11-17 09:04:31.351830] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.565 [2024-11-17 09:04:31.351838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.565 [2024-11-17 09:04:31.351857] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.351903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 09:04:31.351910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.351914] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.351918] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.351929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.351934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.351939] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.566 [2024-11-17 09:04:31.351946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.566 [2024-11-17 09:04:31.351964] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.352012] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 09:04:31.352019] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.352023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352028] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.352038] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352043] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352048] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.566 [2024-11-17 09:04:31.352056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.566 [2024-11-17 09:04:31.352073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.352115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 
09:04:31.352122] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.352126] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352131] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.352141] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352146] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352151] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.566 [2024-11-17 09:04:31.352159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.566 [2024-11-17 09:04:31.352176] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.352221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 09:04:31.352228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.352232] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352236] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.352247] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352252] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352257] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.566 [2024-11-17 09:04:31.352265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.566 [2024-11-17 09:04:31.352282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.352330] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 09:04:31.352337] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.352341] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352346] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.352356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.566 [2024-11-17 09:04:31.352373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.566 [2024-11-17 09:04:31.352391] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.352435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 09:04:31.352442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.352446] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 
[2024-11-17 09:04:31.352462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.352473] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.566 [2024-11-17 09:04:31.352490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.566 [2024-11-17 09:04:31.352507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.352550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 09:04:31.352557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.352561] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352565] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.352575] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352580] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.352584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.566 [2024-11-17 09:04:31.352592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.566 [2024-11-17 09:04:31.356696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.356727] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 09:04:31.356737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.356742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.356746] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.356762] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.356768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.356772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x520d30) 00:13:54.566 [2024-11-17 09:04:31.356782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.566 [2024-11-17 09:04:31.356809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x57f350, cid 3, qid 0 00:13:54.566 [2024-11-17 09:04:31.356863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.566 [2024-11-17 09:04:31.356870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.566 [2024-11-17 09:04:31.356875] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.566 [2024-11-17 09:04:31.356879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x57f350) on tqpair=0x520d30 00:13:54.566 [2024-11-17 09:04:31.356889] 
nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:13:54.566 00:13:54.566 09:04:31 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:54.566 [2024-11-17 09:04:31.395179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:54.566 [2024-11-17 09:04:31.395231] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68543 ] 00:13:54.830 [2024-11-17 09:04:31.534819] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:54.830 [2024-11-17 09:04:31.534903] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:54.830 [2024-11-17 09:04:31.534910] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:54.830 [2024-11-17 09:04:31.534923] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:54.830 [2024-11-17 09:04:31.534936] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:54.830 [2024-11-17 09:04:31.535067] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:54.830 [2024-11-17 09:04:31.535120] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x219fd30 0 00:13:54.830 [2024-11-17 09:04:31.540639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:54.830 [2024-11-17 09:04:31.540662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:54.830 [2024-11-17 09:04:31.540685] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:54.830 [2024-11-17 09:04:31.540688] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:54.830 [2024-11-17 09:04:31.540730] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.830 [2024-11-17 09:04:31.540738] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.830 [2024-11-17 09:04:31.540742] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.830 [2024-11-17 09:04:31.540755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:54.830 [2024-11-17 09:04:31.540785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.830 [2024-11-17 09:04:31.548673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.830 [2024-11-17 09:04:31.548694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.830 [2024-11-17 09:04:31.548715] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.830 [2024-11-17 09:04:31.548720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.830 [2024-11-17 09:04:31.548734] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:54.830 [2024-11-17 09:04:31.548742] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read 
vs (no timeout) 00:13:54.830 [2024-11-17 09:04:31.548747] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:54.830 [2024-11-17 09:04:31.548763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.830 [2024-11-17 09:04:31.548768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.830 [2024-11-17 09:04:31.548772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.830 [2024-11-17 09:04:31.548780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.830 [2024-11-17 09:04:31.548807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.830 [2024-11-17 09:04:31.548862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.830 [2024-11-17 09:04:31.548869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.830 [2024-11-17 09:04:31.548872] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.830 [2024-11-17 09:04:31.548876] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.830 [2024-11-17 09:04:31.548883] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:54.830 [2024-11-17 09:04:31.548890] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:54.830 [2024-11-17 09:04:31.548898] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.830 [2024-11-17 09:04:31.548901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.830 [2024-11-17 09:04:31.548905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.830 [2024-11-17 09:04:31.548912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.831 [2024-11-17 09:04:31.548929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.831 [2024-11-17 09:04:31.549006] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.831 [2024-11-17 09:04:31.549013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.831 [2024-11-17 09:04:31.549017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549021] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.831 [2024-11-17 09:04:31.549027] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:54.831 [2024-11-17 09:04:31.549036] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:54.831 [2024-11-17 09:04:31.549044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.831 [2024-11-17 09:04:31.549059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.831 [2024-11-17 09:04:31.549076] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.831 [2024-11-17 09:04:31.549123] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.831 [2024-11-17 09:04:31.549130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.831 [2024-11-17 09:04:31.549134] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549138] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.831 [2024-11-17 09:04:31.549145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:54.831 [2024-11-17 09:04:31.549155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549163] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.831 [2024-11-17 09:04:31.549170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.831 [2024-11-17 09:04:31.549187] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.831 [2024-11-17 09:04:31.549231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.831 [2024-11-17 09:04:31.549238] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.831 [2024-11-17 09:04:31.549241] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549245] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.831 [2024-11-17 09:04:31.549251] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:54.831 [2024-11-17 09:04:31.549256] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:54.831 [2024-11-17 09:04:31.549265] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:54.831 [2024-11-17 09:04:31.549371] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:54.831 [2024-11-17 09:04:31.549376] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:54.831 [2024-11-17 09:04:31.549384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.831 [2024-11-17 09:04:31.549400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.831 [2024-11-17 09:04:31.549417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.831 [2024-11-17 
09:04:31.549462] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.831 [2024-11-17 09:04:31.549468] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.831 [2024-11-17 09:04:31.549472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549476] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.831 [2024-11-17 09:04:31.549482] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:54.831 [2024-11-17 09:04:31.549492] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549496] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549500] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.831 [2024-11-17 09:04:31.549508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.831 [2024-11-17 09:04:31.549524] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.831 [2024-11-17 09:04:31.549571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.831 [2024-11-17 09:04:31.549578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.831 [2024-11-17 09:04:31.549582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549586] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.831 [2024-11-17 09:04:31.549591] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:54.831 [2024-11-17 09:04:31.549596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:54.831 [2024-11-17 09:04:31.549605] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:54.831 [2024-11-17 09:04:31.549620] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:54.831 [2024-11-17 09:04:31.549631] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549635] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.831 [2024-11-17 09:04:31.549662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.831 [2024-11-17 09:04:31.549683] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.831 [2024-11-17 09:04:31.549808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.831 [2024-11-17 09:04:31.549817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.831 [2024-11-17 09:04:31.549821] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549825] 
nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x219fd30): datao=0, datal=4096, cccid=0 00:13:54.831 [2024-11-17 09:04:31.549831] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fdf30) on tqpair(0x219fd30): expected_datao=0, payload_size=4096 00:13:54.831 [2024-11-17 09:04:31.549840] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549846] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549855] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.831 [2024-11-17 09:04:31.549861] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.831 [2024-11-17 09:04:31.549865] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549870] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.831 [2024-11-17 09:04:31.549880] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:54.831 [2024-11-17 09:04:31.549886] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:54.831 [2024-11-17 09:04:31.549891] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:54.831 [2024-11-17 09:04:31.549895] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:54.831 [2024-11-17 09:04:31.549901] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:54.831 [2024-11-17 09:04:31.549906] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:54.831 [2024-11-17 09:04:31.549921] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:54.831 [2024-11-17 09:04:31.549930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.549939] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 00:13:54.831 [2024-11-17 09:04:31.549948] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:54.831 [2024-11-17 09:04:31.549969] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.831 [2024-11-17 09:04:31.550034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.831 [2024-11-17 09:04:31.550041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.831 [2024-11-17 09:04:31.550045] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.550049] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fdf30) on tqpair=0x219fd30 00:13:54.831 [2024-11-17 09:04:31.550073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.550077] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.550081] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x219fd30) 
00:13:54.831 [2024-11-17 09:04:31.550088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.831 [2024-11-17 09:04:31.550094] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.550098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.550102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x219fd30) 00:13:54.831 [2024-11-17 09:04:31.550108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.831 [2024-11-17 09:04:31.550114] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.550118] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.831 [2024-11-17 09:04:31.550121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x219fd30) 00:13:54.831 [2024-11-17 09:04:31.550127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.832 [2024-11-17 09:04:31.550133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.832 [2024-11-17 09:04:31.550147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.832 [2024-11-17 09:04:31.550152] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550165] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550176] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x219fd30) 00:13:54.832 [2024-11-17 09:04:31.550187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.832 [2024-11-17 09:04:31.550206] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fdf30, cid 0, qid 0 00:13:54.832 [2024-11-17 09:04:31.550214] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe090, cid 1, qid 0 00:13:54.832 [2024-11-17 09:04:31.550218] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe1f0, cid 2, qid 0 00:13:54.832 [2024-11-17 09:04:31.550223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.832 [2024-11-17 09:04:31.550228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe4b0, cid 4, qid 0 00:13:54.832 [2024-11-17 09:04:31.550314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.832 [2024-11-17 09:04:31.550321] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.832 
[2024-11-17 09:04:31.550325] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe4b0) on tqpair=0x219fd30 00:13:54.832 [2024-11-17 09:04:31.550335] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:54.832 [2024-11-17 09:04:31.550341] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550349] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550360] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550367] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550371] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550375] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x219fd30) 00:13:54.832 [2024-11-17 09:04:31.550382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:54.832 [2024-11-17 09:04:31.550400] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe4b0, cid 4, qid 0 00:13:54.832 [2024-11-17 09:04:31.550445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.832 [2024-11-17 09:04:31.550452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.832 [2024-11-17 09:04:31.550455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe4b0) on tqpair=0x219fd30 00:13:54.832 [2024-11-17 09:04:31.550520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550531] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550539] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550547] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x219fd30) 00:13:54.832 [2024-11-17 09:04:31.550555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.832 [2024-11-17 09:04:31.550572] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe4b0, cid 4, qid 0 00:13:54.832 [2024-11-17 09:04:31.550631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.832 [2024-11-17 09:04:31.550650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.832 [2024-11-17 09:04:31.550656] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550659] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x219fd30): datao=0, datal=4096, cccid=4 00:13:54.832 [2024-11-17 09:04:31.550664] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fe4b0) on tqpair(0x219fd30): expected_datao=0, payload_size=4096 00:13:54.832 [2024-11-17 09:04:31.550672] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550676] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550685] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.832 [2024-11-17 09:04:31.550691] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.832 [2024-11-17 09:04:31.550695] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe4b0) on tqpair=0x219fd30 00:13:54.832 [2024-11-17 09:04:31.550715] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:54.832 [2024-11-17 09:04:31.550725] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550749] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x219fd30) 00:13:54.832 [2024-11-17 09:04:31.550760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.832 [2024-11-17 09:04:31.550780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe4b0, cid 4, qid 0 00:13:54.832 [2024-11-17 09:04:31.550857] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.832 [2024-11-17 09:04:31.550864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.832 [2024-11-17 09:04:31.550867] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550871] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x219fd30): datao=0, datal=4096, cccid=4 00:13:54.832 [2024-11-17 09:04:31.550876] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fe4b0) on tqpair(0x219fd30): expected_datao=0, payload_size=4096 00:13:54.832 [2024-11-17 09:04:31.550884] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550888] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550896] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.832 [2024-11-17 09:04:31.550902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.832 [2024-11-17 09:04:31.550906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe4b0) on tqpair=0x219fd30 00:13:54.832 [2024-11-17 09:04:31.550925] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550937] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.550945] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550949] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.550953] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x219fd30) 00:13:54.832 [2024-11-17 09:04:31.550961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.832 [2024-11-17 09:04:31.550980] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe4b0, cid 4, qid 0 00:13:54.832 [2024-11-17 09:04:31.551040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.832 [2024-11-17 09:04:31.551047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.832 [2024-11-17 09:04:31.551051] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.551055] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x219fd30): datao=0, datal=4096, cccid=4 00:13:54.832 [2024-11-17 09:04:31.551059] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fe4b0) on tqpair(0x219fd30): expected_datao=0, payload_size=4096 00:13:54.832 [2024-11-17 09:04:31.551067] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.551071] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.551079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.832 [2024-11-17 09:04:31.551085] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.832 [2024-11-17 09:04:31.551089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.551093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe4b0) on tqpair=0x219fd30 00:13:54.832 [2024-11-17 09:04:31.551103] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.551111] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.551125] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.551132] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.551137] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:54.832 [2024-11-17 09:04:31.551142] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:54.832 [2024-11-17 09:04:31.551147] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:54.832 
[2024-11-17 09:04:31.551153] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:54.832 [2024-11-17 09:04:31.551169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.832 [2024-11-17 09:04:31.551174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.833 [2024-11-17 09:04:31.551192] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551196] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551200] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.833 [2024-11-17 09:04:31.551230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe4b0, cid 4, qid 0 00:13:54.833 [2024-11-17 09:04:31.551238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe610, cid 5, qid 0 00:13:54.833 [2024-11-17 09:04:31.551296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.833 [2024-11-17 09:04:31.551302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.833 [2024-11-17 09:04:31.551306] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551310] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe4b0) on tqpair=0x219fd30 00:13:54.833 [2024-11-17 09:04:31.551318] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.833 [2024-11-17 09:04:31.551324] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.833 [2024-11-17 09:04:31.551327] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551331] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe610) on tqpair=0x219fd30 00:13:54.833 [2024-11-17 09:04:31.551342] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551347] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551351] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.833 [2024-11-17 09:04:31.551375] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe610, cid 5, qid 0 00:13:54.833 [2024-11-17 09:04:31.551424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.833 [2024-11-17 09:04:31.551431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.833 [2024-11-17 09:04:31.551434] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551438] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe610) on tqpair=0x219fd30 00:13:54.833 
[2024-11-17 09:04:31.551450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.833 [2024-11-17 09:04:31.551481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe610, cid 5, qid 0 00:13:54.833 [2024-11-17 09:04:31.551530] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.833 [2024-11-17 09:04:31.551536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.833 [2024-11-17 09:04:31.551540] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551544] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe610) on tqpair=0x219fd30 00:13:54.833 [2024-11-17 09:04:31.551555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.833 [2024-11-17 09:04:31.551586] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe610, cid 5, qid 0 00:13:54.833 [2024-11-17 09:04:31.551662] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.833 [2024-11-17 09:04:31.551671] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.833 [2024-11-17 09:04:31.551675] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551679] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe610) on tqpair=0x219fd30 00:13:54.833 [2024-11-17 09:04:31.551694] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551699] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551703] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.833 [2024-11-17 09:04:31.551718] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551722] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551726] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.833 [2024-11-17 09:04:31.551740] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551745] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551748] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.833 [2024-11-17 09:04:31.551763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x219fd30) 00:13:54.833 [2024-11-17 09:04:31.551777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.833 [2024-11-17 09:04:31.551799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe610, cid 5, qid 0 00:13:54.833 [2024-11-17 09:04:31.551806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe4b0, cid 4, qid 0 00:13:54.833 [2024-11-17 09:04:31.551811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe770, cid 6, qid 0 00:13:54.833 [2024-11-17 09:04:31.551816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe8d0, cid 7, qid 0 00:13:54.833 [2024-11-17 09:04:31.551943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.833 [2024-11-17 09:04:31.551950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.833 [2024-11-17 09:04:31.551954] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551957] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x219fd30): datao=0, datal=8192, cccid=5 00:13:54.833 [2024-11-17 09:04:31.551962] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fe610) on tqpair(0x219fd30): expected_datao=0, payload_size=8192 00:13:54.833 [2024-11-17 09:04:31.551982] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.551988] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.833 [2024-11-17 09:04:31.552015] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.833 [2024-11-17 09:04:31.552018] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552022] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x219fd30): datao=0, datal=512, cccid=4 00:13:54.833 [2024-11-17 09:04:31.552027] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fe4b0) on tqpair(0x219fd30): expected_datao=0, payload_size=512 00:13:54.833 [2024-11-17 09:04:31.552034] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552038] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552044] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.833 [2024-11-17 09:04:31.552049] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.833 [2024-11-17 09:04:31.552053] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:13:54.833 [2024-11-17 09:04:31.552056] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x219fd30): datao=0, datal=512, cccid=6 00:13:54.833 [2024-11-17 09:04:31.552061] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fe770) on tqpair(0x219fd30): expected_datao=0, payload_size=512 00:13:54.833 [2024-11-17 09:04:31.552068] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552072] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:54.833 [2024-11-17 09:04:31.552083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:54.833 [2024-11-17 09:04:31.552087] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552090] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x219fd30): datao=0, datal=4096, cccid=7 00:13:54.833 [2024-11-17 09:04:31.552095] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fe8d0) on tqpair(0x219fd30): expected_datao=0, payload_size=4096 00:13:54.833 [2024-11-17 09:04:31.552102] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552105] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.833 [2024-11-17 09:04:31.552119] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.833 [2024-11-17 09:04:31.552123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552127] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe610) on tqpair=0x219fd30 00:13:54.833 [2024-11-17 09:04:31.552144] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.833 [2024-11-17 09:04:31.552151] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.833 [2024-11-17 09:04:31.552154] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552158] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe4b0) on tqpair=0x219fd30 00:13:54.833 [2024-11-17 09:04:31.552169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.833 [2024-11-17 09:04:31.552175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.833 [2024-11-17 09:04:31.552179] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.833 [2024-11-17 09:04:31.552183] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe770) on tqpair=0x219fd30 00:13:54.833 [2024-11-17 09:04:31.552191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.834 [2024-11-17 09:04:31.552197] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.834 [2024-11-17 09:04:31.552200] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.834 [2024-11-17 09:04:31.552204] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe8d0) on tqpair=0x219fd30 00:13:54.834 ===================================================== 00:13:54.834 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.834 ===================================================== 00:13:54.834 Controller Capabilities/Features 00:13:54.834 
================================ 00:13:54.834 Vendor ID: 8086 00:13:54.834 Subsystem Vendor ID: 8086 00:13:54.834 Serial Number: SPDK00000000000001 00:13:54.834 Model Number: SPDK bdev Controller 00:13:54.834 Firmware Version: 24.01.1 00:13:54.834 Recommended Arb Burst: 6 00:13:54.834 IEEE OUI Identifier: e4 d2 5c 00:13:54.834 Multi-path I/O 00:13:54.834 May have multiple subsystem ports: Yes 00:13:54.834 May have multiple controllers: Yes 00:13:54.834 Associated with SR-IOV VF: No 00:13:54.834 Max Data Transfer Size: 131072 00:13:54.834 Max Number of Namespaces: 32 00:13:54.834 Max Number of I/O Queues: 127 00:13:54.834 NVMe Specification Version (VS): 1.3 00:13:54.834 NVMe Specification Version (Identify): 1.3 00:13:54.834 Maximum Queue Entries: 128 00:13:54.834 Contiguous Queues Required: Yes 00:13:54.834 Arbitration Mechanisms Supported 00:13:54.834 Weighted Round Robin: Not Supported 00:13:54.834 Vendor Specific: Not Supported 00:13:54.834 Reset Timeout: 15000 ms 00:13:54.834 Doorbell Stride: 4 bytes 00:13:54.834 NVM Subsystem Reset: Not Supported 00:13:54.834 Command Sets Supported 00:13:54.834 NVM Command Set: Supported 00:13:54.834 Boot Partition: Not Supported 00:13:54.834 Memory Page Size Minimum: 4096 bytes 00:13:54.834 Memory Page Size Maximum: 4096 bytes 00:13:54.834 Persistent Memory Region: Not Supported 00:13:54.834 Optional Asynchronous Events Supported 00:13:54.834 Namespace Attribute Notices: Supported 00:13:54.834 Firmware Activation Notices: Not Supported 00:13:54.834 ANA Change Notices: Not Supported 00:13:54.834 PLE Aggregate Log Change Notices: Not Supported 00:13:54.834 LBA Status Info Alert Notices: Not Supported 00:13:54.834 EGE Aggregate Log Change Notices: Not Supported 00:13:54.834 Normal NVM Subsystem Shutdown event: Not Supported 00:13:54.834 Zone Descriptor Change Notices: Not Supported 00:13:54.834 Discovery Log Change Notices: Not Supported 00:13:54.834 Controller Attributes 00:13:54.834 128-bit Host Identifier: Supported 00:13:54.834 Non-Operational Permissive Mode: Not Supported 00:13:54.834 NVM Sets: Not Supported 00:13:54.834 Read Recovery Levels: Not Supported 00:13:54.834 Endurance Groups: Not Supported 00:13:54.834 Predictable Latency Mode: Not Supported 00:13:54.834 Traffic Based Keep ALive: Not Supported 00:13:54.834 Namespace Granularity: Not Supported 00:13:54.834 SQ Associations: Not Supported 00:13:54.834 UUID List: Not Supported 00:13:54.834 Multi-Domain Subsystem: Not Supported 00:13:54.834 Fixed Capacity Management: Not Supported 00:13:54.834 Variable Capacity Management: Not Supported 00:13:54.834 Delete Endurance Group: Not Supported 00:13:54.834 Delete NVM Set: Not Supported 00:13:54.834 Extended LBA Formats Supported: Not Supported 00:13:54.834 Flexible Data Placement Supported: Not Supported 00:13:54.834 00:13:54.834 Controller Memory Buffer Support 00:13:54.834 ================================ 00:13:54.834 Supported: No 00:13:54.834 00:13:54.834 Persistent Memory Region Support 00:13:54.834 ================================ 00:13:54.834 Supported: No 00:13:54.834 00:13:54.834 Admin Command Set Attributes 00:13:54.834 ============================ 00:13:54.834 Security Send/Receive: Not Supported 00:13:54.834 Format NVM: Not Supported 00:13:54.834 Firmware Activate/Download: Not Supported 00:13:54.834 Namespace Management: Not Supported 00:13:54.834 Device Self-Test: Not Supported 00:13:54.834 Directives: Not Supported 00:13:54.834 NVMe-MI: Not Supported 00:13:54.834 Virtualization Management: Not Supported 00:13:54.834 Doorbell Buffer 
Config: Not Supported 00:13:54.834 Get LBA Status Capability: Not Supported 00:13:54.834 Command & Feature Lockdown Capability: Not Supported 00:13:54.834 Abort Command Limit: 4 00:13:54.834 Async Event Request Limit: 4 00:13:54.834 Number of Firmware Slots: N/A 00:13:54.834 Firmware Slot 1 Read-Only: N/A 00:13:54.834 Firmware Activation Without Reset: N/A 00:13:54.834 Multiple Update Detection Support: N/A 00:13:54.834 Firmware Update Granularity: No Information Provided 00:13:54.834 Per-Namespace SMART Log: No 00:13:54.834 Asymmetric Namespace Access Log Page: Not Supported 00:13:54.834 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:54.834 Command Effects Log Page: Supported 00:13:54.834 Get Log Page Extended Data: Supported 00:13:54.834 Telemetry Log Pages: Not Supported 00:13:54.834 Persistent Event Log Pages: Not Supported 00:13:54.834 Supported Log Pages Log Page: May Support 00:13:54.834 Commands Supported & Effects Log Page: Not Supported 00:13:54.834 Feature Identifiers & Effects Log Page:May Support 00:13:54.834 NVMe-MI Commands & Effects Log Page: May Support 00:13:54.834 Data Area 4 for Telemetry Log: Not Supported 00:13:54.834 Error Log Page Entries Supported: 128 00:13:54.834 Keep Alive: Supported 00:13:54.834 Keep Alive Granularity: 10000 ms 00:13:54.834 00:13:54.834 NVM Command Set Attributes 00:13:54.834 ========================== 00:13:54.834 Submission Queue Entry Size 00:13:54.834 Max: 64 00:13:54.834 Min: 64 00:13:54.834 Completion Queue Entry Size 00:13:54.834 Max: 16 00:13:54.834 Min: 16 00:13:54.834 Number of Namespaces: 32 00:13:54.834 Compare Command: Supported 00:13:54.834 Write Uncorrectable Command: Not Supported 00:13:54.834 Dataset Management Command: Supported 00:13:54.834 Write Zeroes Command: Supported 00:13:54.834 Set Features Save Field: Not Supported 00:13:54.834 Reservations: Supported 00:13:54.834 Timestamp: Not Supported 00:13:54.834 Copy: Supported 00:13:54.834 Volatile Write Cache: Present 00:13:54.834 Atomic Write Unit (Normal): 1 00:13:54.834 Atomic Write Unit (PFail): 1 00:13:54.834 Atomic Compare & Write Unit: 1 00:13:54.834 Fused Compare & Write: Supported 00:13:54.834 Scatter-Gather List 00:13:54.834 SGL Command Set: Supported 00:13:54.834 SGL Keyed: Supported 00:13:54.834 SGL Bit Bucket Descriptor: Not Supported 00:13:54.834 SGL Metadata Pointer: Not Supported 00:13:54.834 Oversized SGL: Not Supported 00:13:54.834 SGL Metadata Address: Not Supported 00:13:54.834 SGL Offset: Supported 00:13:54.834 Transport SGL Data Block: Not Supported 00:13:54.834 Replay Protected Memory Block: Not Supported 00:13:54.834 00:13:54.834 Firmware Slot Information 00:13:54.834 ========================= 00:13:54.834 Active slot: 1 00:13:54.834 Slot 1 Firmware Revision: 24.01.1 00:13:54.834 00:13:54.834 00:13:54.834 Commands Supported and Effects 00:13:54.834 ============================== 00:13:54.834 Admin Commands 00:13:54.834 -------------- 00:13:54.834 Get Log Page (02h): Supported 00:13:54.834 Identify (06h): Supported 00:13:54.834 Abort (08h): Supported 00:13:54.834 Set Features (09h): Supported 00:13:54.834 Get Features (0Ah): Supported 00:13:54.834 Asynchronous Event Request (0Ch): Supported 00:13:54.834 Keep Alive (18h): Supported 00:13:54.834 I/O Commands 00:13:54.834 ------------ 00:13:54.834 Flush (00h): Supported LBA-Change 00:13:54.834 Write (01h): Supported LBA-Change 00:13:54.834 Read (02h): Supported 00:13:54.834 Compare (05h): Supported 00:13:54.834 Write Zeroes (08h): Supported LBA-Change 00:13:54.834 Dataset Management (09h): Supported 
LBA-Change 00:13:54.834 Copy (19h): Supported LBA-Change 00:13:54.834 Unknown (79h): Supported LBA-Change 00:13:54.834 Unknown (7Ah): Supported 00:13:54.834 00:13:54.834 Error Log 00:13:54.834 ========= 00:13:54.834 00:13:54.834 Arbitration 00:13:54.834 =========== 00:13:54.834 Arbitration Burst: 1 00:13:54.834 00:13:54.834 Power Management 00:13:54.834 ================ 00:13:54.834 Number of Power States: 1 00:13:54.834 Current Power State: Power State #0 00:13:54.834 Power State #0: 00:13:54.834 Max Power: 0.00 W 00:13:54.834 Non-Operational State: Operational 00:13:54.834 Entry Latency: Not Reported 00:13:54.834 Exit Latency: Not Reported 00:13:54.834 Relative Read Throughput: 0 00:13:54.834 Relative Read Latency: 0 00:13:54.834 Relative Write Throughput: 0 00:13:54.834 Relative Write Latency: 0 00:13:54.834 Idle Power: Not Reported 00:13:54.834 Active Power: Not Reported 00:13:54.834 Non-Operational Permissive Mode: Not Supported 00:13:54.834 00:13:54.834 Health Information 00:13:54.834 ================== 00:13:54.835 Critical Warnings: 00:13:54.835 Available Spare Space: OK 00:13:54.835 Temperature: OK 00:13:54.835 Device Reliability: OK 00:13:54.835 Read Only: No 00:13:54.835 Volatile Memory Backup: OK 00:13:54.835 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:54.835 Temperature Threshold: [2024-11-17 09:04:31.552314] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.552321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.552325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.552333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.835 [2024-11-17 09:04:31.552355] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe8d0, cid 7, qid 0 00:13:54.835 [2024-11-17 09:04:31.552406] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.835 [2024-11-17 09:04:31.552413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.835 [2024-11-17 09:04:31.552416] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.552420] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe8d0) on tqpair=0x219fd30 00:13:54.835 [2024-11-17 09:04:31.552456] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:54.835 [2024-11-17 09:04:31.552470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.835 [2024-11-17 09:04:31.552477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.835 [2024-11-17 09:04:31.552483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.835 [2024-11-17 09:04:31.552490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.835 [2024-11-17 09:04:31.552498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.552503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.552507] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.552514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.835 [2024-11-17 09:04:31.552536] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.835 [2024-11-17 09:04:31.552586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.835 [2024-11-17 09:04:31.552593] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.835 [2024-11-17 09:04:31.552597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.552601] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.835 [2024-11-17 09:04:31.556626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.556644] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.556649] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.556658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.835 [2024-11-17 09:04:31.556693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.835 [2024-11-17 09:04:31.556766] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.835 [2024-11-17 09:04:31.556774] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.835 [2024-11-17 09:04:31.556778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.556782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.835 [2024-11-17 09:04:31.556789] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:54.835 [2024-11-17 09:04:31.556794] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:54.835 [2024-11-17 09:04:31.556805] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.556810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.556814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.556821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.835 [2024-11-17 09:04:31.556839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.835 [2024-11-17 09:04:31.556893] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.835 [2024-11-17 09:04:31.556900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.835 [2024-11-17 09:04:31.556904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.556908] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.835 [2024-11-17 09:04:31.556921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.556925] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.556930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.556937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.835 [2024-11-17 09:04:31.556954] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.835 [2024-11-17 09:04:31.557008] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.835 [2024-11-17 09:04:31.557015] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.835 [2024-11-17 09:04:31.557019] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557023] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.835 [2024-11-17 09:04:31.557034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.557050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.835 [2024-11-17 09:04:31.557067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.835 [2024-11-17 09:04:31.557110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.835 [2024-11-17 09:04:31.557117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.835 [2024-11-17 09:04:31.557120] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557124] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.835 [2024-11-17 09:04:31.557136] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557141] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557161] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.557169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.835 [2024-11-17 09:04:31.557186] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.835 [2024-11-17 09:04:31.557233] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.835 [2024-11-17 09:04:31.557240] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.835 [2024-11-17 09:04:31.557244] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557248] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.835 [2024-11-17 09:04:31.557260] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557265] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557269] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.557277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.835 [2024-11-17 09:04:31.557305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.835 [2024-11-17 09:04:31.557360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.835 [2024-11-17 09:04:31.557367] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.835 [2024-11-17 09:04:31.557371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557376] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.835 [2024-11-17 09:04:31.557387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.835 [2024-11-17 09:04:31.557396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.835 [2024-11-17 09:04:31.557404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.557421] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.557474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.557481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.557485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557489] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.557501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557506] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.557517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.557534] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.557581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.557602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.557623] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.557651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557662] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.557670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:13:54.836 [2024-11-17 09:04:31.557690] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.557751] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.557758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.557762] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557767] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.557778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557787] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.557795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.557814] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.557864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.557871] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.557875] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.557891] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557896] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.557908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.557925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.557971] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.557978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.557982] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.557986] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.557998] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.558014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.558046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.558105] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.558112] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.558116] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558120] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.558131] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.558146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.558163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.558206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.558213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.558216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.558231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558236] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.558247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.558263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.558307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.558314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.558317] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558321] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.558332] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558341] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.558348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.558364] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.558408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.558415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 
[2024-11-17 09:04:31.558418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.558433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.558449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.558465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.558511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.558518] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.558522] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558526] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.558537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558542] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.558553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.558569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.558609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.558616] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.558619] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558638] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.558651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558655] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558659] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.836 [2024-11-17 09:04:31.558667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.836 [2024-11-17 09:04:31.558685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.836 [2024-11-17 09:04:31.558735] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.836 [2024-11-17 09:04:31.558742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.836 [2024-11-17 09:04:31.558745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558749] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.836 [2024-11-17 09:04:31.558760] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.836 [2024-11-17 09:04:31.558765] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.558769] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.558776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.558792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.558836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.558842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.558846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.558850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.558861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.558865] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.558869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.558876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.558892] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.558939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.558945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.558949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.558954] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.558965] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.558969] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.558973] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.558980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.558997] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559046] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559057] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559072] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559076] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.559103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559152] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559155] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559159] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559179] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.559202] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559272] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559276] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.559303] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559364] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559375] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559383] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.559407] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559461] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559465] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559476] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.559508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559558] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559562] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559577] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559582] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559585] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.559633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559691] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559710] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:13:54.837 [2024-11-17 09:04:31.559742] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559786] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559793] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559797] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559801] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559812] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559816] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559820] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.559844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.559902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559906] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.837 [2024-11-17 09:04:31.559917] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559922] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.559926] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.837 [2024-11-17 09:04:31.559933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.837 [2024-11-17 09:04:31.559949] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.837 [2024-11-17 09:04:31.559990] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.837 [2024-11-17 09:04:31.559996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.837 [2024-11-17 09:04:31.560000] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.837 [2024-11-17 09:04:31.560004] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.838 [2024-11-17 09:04:31.560015] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560020] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.838 [2024-11-17 09:04:31.560031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.838 [2024-11-17 09:04:31.560047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.838 [2024-11-17 09:04:31.560093] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.838 [2024-11-17 09:04:31.560100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.838 [2024-11-17 09:04:31.560104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.838 [2024-11-17 09:04:31.560119] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.838 [2024-11-17 09:04:31.560134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.838 [2024-11-17 09:04:31.560151] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.838 [2024-11-17 09:04:31.560194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.838 [2024-11-17 09:04:31.560209] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.838 [2024-11-17 09:04:31.560213] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560218] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.838 [2024-11-17 09:04:31.560229] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560234] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560238] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.838 [2024-11-17 09:04:31.560245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.838 [2024-11-17 09:04:31.560263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.838 [2024-11-17 09:04:31.560306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.838 [2024-11-17 09:04:31.560313] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.838 [2024-11-17 09:04:31.560316] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560320] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.838 [2024-11-17 09:04:31.560331] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560336] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.838 [2024-11-17 09:04:31.560347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.838 [2024-11-17 09:04:31.560363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.838 [2024-11-17 09:04:31.560407] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.838 [2024-11-17 09:04:31.560414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.838 
[2024-11-17 09:04:31.560417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.838 [2024-11-17 09:04:31.560432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560437] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560440] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.838 [2024-11-17 09:04:31.560448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.838 [2024-11-17 09:04:31.560464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.838 [2024-11-17 09:04:31.560511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.838 [2024-11-17 09:04:31.560526] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.838 [2024-11-17 09:04:31.560530] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560534] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.838 [2024-11-17 09:04:31.560546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.560554] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.838 [2024-11-17 09:04:31.560562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.838 [2024-11-17 09:04:31.560579] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.838 [2024-11-17 09:04:31.564627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.838 [2024-11-17 09:04:31.564648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.838 [2024-11-17 09:04:31.564653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.564657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.838 [2024-11-17 09:04:31.564671] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.564676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.564680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x219fd30) 00:13:54.838 [2024-11-17 09:04:31.564688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:54.838 [2024-11-17 09:04:31.564712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fe350, cid 3, qid 0 00:13:54.838 [2024-11-17 09:04:31.564779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:54.838 [2024-11-17 09:04:31.564786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:54.838 [2024-11-17 09:04:31.564789] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:54.838 [2024-11-17 09:04:31.564793] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x21fe350) on tqpair=0x219fd30 00:13:54.838 [2024-11-17 09:04:31.564802] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:13:54.838 0 Kelvin (-273 Celsius) 00:13:54.838 Available Spare: 0% 00:13:54.838 Available Spare Threshold: 0% 00:13:54.838 Life Percentage Used: 0% 00:13:54.838 Data Units Read: 0 00:13:54.838 Data Units Written: 0 00:13:54.838 Host Read Commands: 0 00:13:54.838 Host Write Commands: 0 00:13:54.838 Controller Busy Time: 0 minutes 00:13:54.838 Power Cycles: 0 00:13:54.838 Power On Hours: 0 hours 00:13:54.838 Unsafe Shutdowns: 0 00:13:54.838 Unrecoverable Media Errors: 0 00:13:54.838 Lifetime Error Log Entries: 0 00:13:54.838 Warning Temperature Time: 0 minutes 00:13:54.838 Critical Temperature Time: 0 minutes 00:13:54.838 00:13:54.838 Number of Queues 00:13:54.838 ================ 00:13:54.838 Number of I/O Submission Queues: 127 00:13:54.838 Number of I/O Completion Queues: 127 00:13:54.838 00:13:54.838 Active Namespaces 00:13:54.838 ================= 00:13:54.838 Namespace ID:1 00:13:54.838 Error Recovery Timeout: Unlimited 00:13:54.838 Command Set Identifier: NVM (00h) 00:13:54.838 Deallocate: Supported 00:13:54.838 Deallocated/Unwritten Error: Not Supported 00:13:54.838 Deallocated Read Value: Unknown 00:13:54.838 Deallocate in Write Zeroes: Not Supported 00:13:54.838 Deallocated Guard Field: 0xFFFF 00:13:54.838 Flush: Supported 00:13:54.838 Reservation: Supported 00:13:54.838 Namespace Sharing Capabilities: Multiple Controllers 00:13:54.838 Size (in LBAs): 131072 (0GiB) 00:13:54.838 Capacity (in LBAs): 131072 (0GiB) 00:13:54.838 Utilization (in LBAs): 131072 (0GiB) 00:13:54.838 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:54.838 EUI64: ABCDEF0123456789 00:13:54.838 UUID: b5272449-8cd1-496d-94e4-0e11132a47c6 00:13:54.838 Thin Provisioning: Not Supported 00:13:54.838 Per-NS Atomic Units: Yes 00:13:54.838 Atomic Boundary Size (Normal): 0 00:13:54.838 Atomic Boundary Size (PFail): 0 00:13:54.838 Atomic Boundary Offset: 0 00:13:54.838 Maximum Single Source Range Length: 65535 00:13:54.838 Maximum Copy Length: 65535 00:13:54.838 Maximum Source Range Count: 1 00:13:54.838 NGUID/EUI64 Never Reused: No 00:13:54.838 Namespace Write Protected: No 00:13:54.838 Number of LBA Formats: 1 00:13:54.838 Current LBA Format: LBA Format #00 00:13:54.838 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:54.838 00:13:54.838 09:04:31 -- host/identify.sh@51 -- # sync 00:13:54.838 09:04:31 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.838 09:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.838 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:54.838 09:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.838 09:04:31 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:54.838 09:04:31 -- host/identify.sh@56 -- # nvmftestfini 00:13:54.838 09:04:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:54.838 09:04:31 -- nvmf/common.sh@116 -- # sync 00:13:54.838 09:04:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:54.838 09:04:31 -- nvmf/common.sh@119 -- # set +e 00:13:54.838 09:04:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:54.838 09:04:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:54.838 rmmod nvme_tcp 00:13:54.838 rmmod nvme_fabrics 00:13:54.838 rmmod nvme_keyring 00:13:54.838 09:04:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:54.838 09:04:31 -- 
nvmf/common.sh@123 -- # set -e 00:13:54.838 09:04:31 -- nvmf/common.sh@124 -- # return 0 00:13:54.838 09:04:31 -- nvmf/common.sh@477 -- # '[' -n 68501 ']' 00:13:54.838 09:04:31 -- nvmf/common.sh@478 -- # killprocess 68501 00:13:54.839 09:04:31 -- common/autotest_common.sh@936 -- # '[' -z 68501 ']' 00:13:54.839 09:04:31 -- common/autotest_common.sh@940 -- # kill -0 68501 00:13:54.839 09:04:31 -- common/autotest_common.sh@941 -- # uname 00:13:54.839 09:04:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:54.839 09:04:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68501 00:13:54.839 09:04:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:54.839 09:04:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:54.839 09:04:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68501' 00:13:54.839 killing process with pid 68501 00:13:54.839 09:04:31 -- common/autotest_common.sh@955 -- # kill 68501 00:13:54.839 [2024-11-17 09:04:31.726844] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:54.839 09:04:31 -- common/autotest_common.sh@960 -- # wait 68501 00:13:55.099 09:04:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:55.099 09:04:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:55.099 09:04:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:55.099 09:04:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.099 09:04:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:55.099 09:04:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.099 09:04:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.099 09:04:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.099 09:04:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:55.099 00:13:55.099 real 0m2.603s 00:13:55.099 user 0m7.175s 00:13:55.099 sys 0m0.609s 00:13:55.099 09:04:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:55.099 ************************************ 00:13:55.099 END TEST nvmf_identify 00:13:55.099 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:55.099 ************************************ 00:13:55.099 09:04:32 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:55.099 09:04:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:55.099 09:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:55.099 09:04:32 -- common/autotest_common.sh@10 -- # set +x 00:13:55.359 ************************************ 00:13:55.359 START TEST nvmf_perf 00:13:55.359 ************************************ 00:13:55.359 09:04:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:55.359 * Looking for test storage... 
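Annotation: the nvmf_perf run that starts here builds an NVMe-oF TCP target and then drives it with spdk_nvme_perf at several queue depths and I/O sizes. As a reading aid, the RPC sequence the trace below performs reduces to roughly the following hedged sketch; it assumes a running nvmf_tgt answering on the default /var/tmp/spdk.sock, shortens the full repo paths, and omits error handling and the later lvol variants. The flags and addresses are taken from the commands recorded further down in this log.

    # minimal equivalent of the target setup traced below (paths shortened)
    rpc.py nvmf_create_transport -t tcp -o                                   # enable the TCP transport
    rpc.py bdev_malloc_create 64 512                                         # 64 MiB ram bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1          # local NVMe attached earlier via gen_nvme.sh
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # one of the fabric-side sweeps run below
    spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'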
00:13:55.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:55.359 09:04:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:55.359 09:04:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:55.359 09:04:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:55.359 09:04:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:55.359 09:04:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:55.359 09:04:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:55.359 09:04:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:55.359 09:04:32 -- scripts/common.sh@335 -- # IFS=.-: 00:13:55.359 09:04:32 -- scripts/common.sh@335 -- # read -ra ver1 00:13:55.359 09:04:32 -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.359 09:04:32 -- scripts/common.sh@336 -- # read -ra ver2 00:13:55.359 09:04:32 -- scripts/common.sh@337 -- # local 'op=<' 00:13:55.359 09:04:32 -- scripts/common.sh@339 -- # ver1_l=2 00:13:55.359 09:04:32 -- scripts/common.sh@340 -- # ver2_l=1 00:13:55.359 09:04:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:55.359 09:04:32 -- scripts/common.sh@343 -- # case "$op" in 00:13:55.359 09:04:32 -- scripts/common.sh@344 -- # : 1 00:13:55.359 09:04:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:55.359 09:04:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:55.359 09:04:32 -- scripts/common.sh@364 -- # decimal 1 00:13:55.359 09:04:32 -- scripts/common.sh@352 -- # local d=1 00:13:55.359 09:04:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.359 09:04:32 -- scripts/common.sh@354 -- # echo 1 00:13:55.359 09:04:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:55.359 09:04:32 -- scripts/common.sh@365 -- # decimal 2 00:13:55.359 09:04:32 -- scripts/common.sh@352 -- # local d=2 00:13:55.359 09:04:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.359 09:04:32 -- scripts/common.sh@354 -- # echo 2 00:13:55.359 09:04:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:55.359 09:04:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:55.359 09:04:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:55.359 09:04:32 -- scripts/common.sh@367 -- # return 0 00:13:55.359 09:04:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.359 09:04:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:55.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.359 --rc genhtml_branch_coverage=1 00:13:55.359 --rc genhtml_function_coverage=1 00:13:55.359 --rc genhtml_legend=1 00:13:55.359 --rc geninfo_all_blocks=1 00:13:55.359 --rc geninfo_unexecuted_blocks=1 00:13:55.359 00:13:55.359 ' 00:13:55.359 09:04:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:55.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.359 --rc genhtml_branch_coverage=1 00:13:55.359 --rc genhtml_function_coverage=1 00:13:55.359 --rc genhtml_legend=1 00:13:55.359 --rc geninfo_all_blocks=1 00:13:55.359 --rc geninfo_unexecuted_blocks=1 00:13:55.359 00:13:55.359 ' 00:13:55.359 09:04:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:55.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.359 --rc genhtml_branch_coverage=1 00:13:55.359 --rc genhtml_function_coverage=1 00:13:55.359 --rc genhtml_legend=1 00:13:55.359 --rc geninfo_all_blocks=1 00:13:55.359 --rc geninfo_unexecuted_blocks=1 00:13:55.359 00:13:55.359 ' 00:13:55.359 
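Annotation: the scripts/common.sh trace just above is the stock pure-bash version check ("lt 1.15 2" deciding whether the installed lcov is older than 2.x). A self-contained sketch of the same field-by-field comparison follows; ver_lt is a hypothetical name, and the real helper additionally normalizes non-numeric fields before comparing.

    # compare two dotted versions field by field; succeed when $1 < $2
    ver_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0      # earlier version
            (( x > y )) && return 1      # later version
        done
        return 1                         # equal is not "less than"
    }
    ver_lt 1.15 2 && echo "lcov is older than 2.x"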
09:04:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:55.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.359 --rc genhtml_branch_coverage=1 00:13:55.359 --rc genhtml_function_coverage=1 00:13:55.359 --rc genhtml_legend=1 00:13:55.359 --rc geninfo_all_blocks=1 00:13:55.359 --rc geninfo_unexecuted_blocks=1 00:13:55.359 00:13:55.359 ' 00:13:55.359 09:04:32 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:55.359 09:04:32 -- nvmf/common.sh@7 -- # uname -s 00:13:55.359 09:04:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.359 09:04:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.359 09:04:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.359 09:04:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.359 09:04:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.359 09:04:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.359 09:04:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.359 09:04:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.359 09:04:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.359 09:04:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.359 09:04:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:13:55.359 09:04:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:13:55.359 09:04:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.359 09:04:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.359 09:04:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:55.359 09:04:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:55.359 09:04:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.359 09:04:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.359 09:04:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.359 09:04:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.359 09:04:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.359 09:04:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.359 09:04:32 -- paths/export.sh@5 -- # export PATH 00:13:55.359 09:04:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.359 09:04:32 -- nvmf/common.sh@46 -- # : 0 00:13:55.359 09:04:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:55.359 09:04:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:55.359 09:04:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:55.359 09:04:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.359 09:04:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.359 09:04:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:55.359 09:04:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:55.359 09:04:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:55.359 09:04:32 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:55.359 09:04:32 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:55.359 09:04:32 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:55.359 09:04:32 -- host/perf.sh@17 -- # nvmftestinit 00:13:55.359 09:04:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:55.359 09:04:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.359 09:04:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:55.359 09:04:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:55.359 09:04:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:55.359 09:04:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.359 09:04:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.359 09:04:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.360 09:04:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:55.360 09:04:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:55.360 09:04:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:55.360 09:04:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:55.360 09:04:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:55.360 09:04:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:55.360 09:04:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.360 09:04:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.360 09:04:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:55.360 09:04:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:55.360 09:04:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:55.360 09:04:32 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:55.360 09:04:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:55.360 09:04:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.360 09:04:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:55.360 09:04:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:55.360 09:04:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:55.360 09:04:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:55.360 09:04:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:55.360 09:04:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:55.360 Cannot find device "nvmf_tgt_br" 00:13:55.360 09:04:32 -- nvmf/common.sh@154 -- # true 00:13:55.360 09:04:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:55.360 Cannot find device "nvmf_tgt_br2" 00:13:55.360 09:04:32 -- nvmf/common.sh@155 -- # true 00:13:55.360 09:04:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:55.360 09:04:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:55.360 Cannot find device "nvmf_tgt_br" 00:13:55.360 09:04:32 -- nvmf/common.sh@157 -- # true 00:13:55.360 09:04:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:55.360 Cannot find device "nvmf_tgt_br2" 00:13:55.360 09:04:32 -- nvmf/common.sh@158 -- # true 00:13:55.360 09:04:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:55.617 09:04:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:55.617 09:04:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:55.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:55.617 09:04:32 -- nvmf/common.sh@161 -- # true 00:13:55.617 09:04:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:55.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:55.617 09:04:32 -- nvmf/common.sh@162 -- # true 00:13:55.617 09:04:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:55.617 09:04:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:55.617 09:04:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:55.617 09:04:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:55.617 09:04:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:55.617 09:04:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:55.617 09:04:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:55.617 09:04:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:55.617 09:04:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:55.617 09:04:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:55.617 09:04:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:55.617 09:04:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:55.617 09:04:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:55.617 09:04:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:55.617 09:04:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
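Annotation: for orientation, the nvmf_veth_init commands recorded around this point set up the test network as sketched below. This is a condensed view of the same ip/iptables calls; interface names and addresses are exactly the ones in the trace, while the stale-interface cleanup, the individual link-up steps and the ping checks that appear just below are left out.

    ip netns add nvmf_tgt_ns_spdk                                    # the SPDK target runs inside this namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                  # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP (4420 listener)
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge                                  # joins the three *_br peers in the root namespace
    for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$peer" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in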
00:13:55.617 09:04:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:55.617 09:04:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:55.617 09:04:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:55.617 09:04:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:55.617 09:04:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:55.617 09:04:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:55.617 09:04:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:55.617 09:04:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:55.617 09:04:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:55.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:55.617 00:13:55.617 --- 10.0.0.2 ping statistics --- 00:13:55.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.617 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:55.617 09:04:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:55.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:55.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:55.618 00:13:55.618 --- 10.0.0.3 ping statistics --- 00:13:55.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.618 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:55.618 09:04:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:55.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:55.618 00:13:55.618 --- 10.0.0.1 ping statistics --- 00:13:55.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.618 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:55.618 09:04:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.618 09:04:32 -- nvmf/common.sh@421 -- # return 0 00:13:55.618 09:04:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:55.618 09:04:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.618 09:04:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:55.618 09:04:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:55.618 09:04:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.618 09:04:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:55.618 09:04:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:55.876 09:04:32 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:55.876 09:04:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:55.876 09:04:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:55.876 09:04:32 -- common/autotest_common.sh@10 -- # set +x 00:13:55.876 09:04:32 -- nvmf/common.sh@469 -- # nvmfpid=68726 00:13:55.876 09:04:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.876 09:04:32 -- nvmf/common.sh@470 -- # waitforlisten 68726 00:13:55.876 09:04:32 -- common/autotest_common.sh@829 -- # '[' -z 68726 ']' 00:13:55.876 09:04:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.876 09:04:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.876 09:04:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:55.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.876 09:04:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.876 09:04:32 -- common/autotest_common.sh@10 -- # set +x 00:13:55.876 [2024-11-17 09:04:32.627487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:55.876 [2024-11-17 09:04:32.627579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.876 [2024-11-17 09:04:32.767872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.134 [2024-11-17 09:04:32.825496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:56.135 [2024-11-17 09:04:32.825666] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.135 [2024-11-17 09:04:32.825680] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.135 [2024-11-17 09:04:32.825688] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.135 [2024-11-17 09:04:32.825808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.135 [2024-11-17 09:04:32.826746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.135 [2024-11-17 09:04:32.826839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.135 [2024-11-17 09:04:32.826844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.069 09:04:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.069 09:04:33 -- common/autotest_common.sh@862 -- # return 0 00:13:57.069 09:04:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:57.069 09:04:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.069 09:04:33 -- common/autotest_common.sh@10 -- # set +x 00:13:57.069 09:04:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.069 09:04:33 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:57.069 09:04:33 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:57.325 09:04:34 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:57.325 09:04:34 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:57.584 09:04:34 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:13:57.584 09:04:34 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.843 09:04:34 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:57.843 09:04:34 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:13:57.843 09:04:34 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:57.843 09:04:34 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:57.843 09:04:34 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:58.101 [2024-11-17 09:04:34.833847] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.101 09:04:34 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:58.360 09:04:35 -- host/perf.sh@45 -- # for bdev in 
$bdevs 00:13:58.360 09:04:35 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:58.618 09:04:35 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:58.618 09:04:35 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:58.877 09:04:35 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.877 [2024-11-17 09:04:35.747121] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.877 09:04:35 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:59.136 09:04:35 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:13:59.136 09:04:35 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:59.136 09:04:35 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:59.136 09:04:35 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:00.512 Initializing NVMe Controllers 00:14:00.512 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:00.512 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:00.512 Initialization complete. Launching workers. 00:14:00.512 ======================================================== 00:14:00.512 Latency(us) 00:14:00.512 Device Information : IOPS MiB/s Average min max 00:14:00.512 PCIE (0000:00:06.0) NSID 1 from core 0: 23558.12 92.02 1357.82 392.49 8073.64 00:14:00.512 ======================================================== 00:14:00.512 Total : 23558.12 92.02 1357.82 392.49 8073.64 00:14:00.512 00:14:00.512 09:04:37 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:01.888 Initializing NVMe Controllers 00:14:01.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:01.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:01.889 Initialization complete. Launching workers. 
00:14:01.889 ======================================================== 00:14:01.889 Latency(us) 00:14:01.889 Device Information : IOPS MiB/s Average min max 00:14:01.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3779.84 14.77 264.23 100.12 7155.43 00:14:01.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8144.67 5921.94 15004.09 00:14:01.889 ======================================================== 00:14:01.889 Total : 3903.35 15.25 513.57 100.12 15004.09 00:14:01.889 00:14:01.889 09:04:38 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:03.287 Initializing NVMe Controllers 00:14:03.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:03.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:03.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:03.287 Initialization complete. Launching workers. 00:14:03.287 ======================================================== 00:14:03.287 Latency(us) 00:14:03.287 Device Information : IOPS MiB/s Average min max 00:14:03.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9173.88 35.84 3488.70 475.82 7382.69 00:14:03.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3967.95 15.50 8065.79 6841.07 14453.85 00:14:03.287 ======================================================== 00:14:03.287 Total : 13141.83 51.34 4870.67 475.82 14453.85 00:14:03.287 00:14:03.287 09:04:39 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:03.287 09:04:39 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:05.819 Initializing NVMe Controllers 00:14:05.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.819 Controller IO queue size 128, less than required. 00:14:05.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:05.819 Controller IO queue size 128, less than required. 00:14:05.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:05.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:05.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:05.819 Initialization complete. Launching workers. 
00:14:05.819 ======================================================== 00:14:05.819 Latency(us) 00:14:05.819 Device Information : IOPS MiB/s Average min max 00:14:05.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2033.43 508.36 63864.76 27769.79 101013.67 00:14:05.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 664.50 166.12 203981.79 98837.61 332766.97 00:14:05.819 ======================================================== 00:14:05.819 Total : 2697.92 674.48 98375.45 27769.79 332766.97 00:14:05.819 00:14:05.819 09:04:42 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:05.819 No valid NVMe controllers or AIO or URING devices found 00:14:05.819 Initializing NVMe Controllers 00:14:05.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.819 Controller IO queue size 128, less than required. 00:14:05.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:05.819 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:05.819 Controller IO queue size 128, less than required. 00:14:05.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:05.819 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:05.819 WARNING: Some requested NVMe devices were skipped 00:14:05.819 09:04:42 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:08.354 Initializing NVMe Controllers 00:14:08.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.354 Controller IO queue size 128, less than required. 00:14:08.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.354 Controller IO queue size 128, less than required. 00:14:08.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:08.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:08.354 Initialization complete. Launching workers. 
00:14:08.354 00:14:08.354 ==================== 00:14:08.354 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:08.354 TCP transport: 00:14:08.354 polls: 8293 00:14:08.354 idle_polls: 0 00:14:08.354 sock_completions: 8293 00:14:08.354 nvme_completions: 6766 00:14:08.354 submitted_requests: 10209 00:14:08.354 queued_requests: 1 00:14:08.354 00:14:08.354 ==================== 00:14:08.354 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:08.354 TCP transport: 00:14:08.354 polls: 8634 00:14:08.354 idle_polls: 0 00:14:08.354 sock_completions: 8634 00:14:08.354 nvme_completions: 6815 00:14:08.354 submitted_requests: 10435 00:14:08.354 queued_requests: 1 00:14:08.354 ======================================================== 00:14:08.354 Latency(us) 00:14:08.354 Device Information : IOPS MiB/s Average min max 00:14:08.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1753.55 438.39 74869.21 40534.22 136204.34 00:14:08.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1765.54 441.38 72826.79 34319.17 119361.26 00:14:08.354 ======================================================== 00:14:08.354 Total : 3519.08 879.77 73844.52 34319.17 136204.34 00:14:08.354 00:14:08.354 09:04:45 -- host/perf.sh@66 -- # sync 00:14:08.354 09:04:45 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.614 09:04:45 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:08.614 09:04:45 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:08.614 09:04:45 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:08.873 09:04:45 -- host/perf.sh@72 -- # ls_guid=2ed3cc1f-18e7-411d-a348-5a92ebac0d8a 00:14:08.873 09:04:45 -- host/perf.sh@73 -- # get_lvs_free_mb 2ed3cc1f-18e7-411d-a348-5a92ebac0d8a 00:14:08.873 09:04:45 -- common/autotest_common.sh@1353 -- # local lvs_uuid=2ed3cc1f-18e7-411d-a348-5a92ebac0d8a 00:14:08.873 09:04:45 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:08.873 09:04:45 -- common/autotest_common.sh@1355 -- # local fc 00:14:08.873 09:04:45 -- common/autotest_common.sh@1356 -- # local cs 00:14:08.873 09:04:45 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:09.132 09:04:46 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:09.132 { 00:14:09.132 "uuid": "2ed3cc1f-18e7-411d-a348-5a92ebac0d8a", 00:14:09.132 "name": "lvs_0", 00:14:09.132 "base_bdev": "Nvme0n1", 00:14:09.132 "total_data_clusters": 1278, 00:14:09.132 "free_clusters": 1278, 00:14:09.132 "block_size": 4096, 00:14:09.132 "cluster_size": 4194304 00:14:09.132 } 00:14:09.132 ]' 00:14:09.132 09:04:46 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="2ed3cc1f-18e7-411d-a348-5a92ebac0d8a") .free_clusters' 00:14:09.391 09:04:46 -- common/autotest_common.sh@1358 -- # fc=1278 00:14:09.391 09:04:46 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="2ed3cc1f-18e7-411d-a348-5a92ebac0d8a") .cluster_size' 00:14:09.391 5112 00:14:09.391 09:04:46 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:09.391 09:04:46 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:14:09.391 09:04:46 -- common/autotest_common.sh@1363 -- # echo 5112 00:14:09.391 09:04:46 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:09.391 09:04:46 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u 2ed3cc1f-18e7-411d-a348-5a92ebac0d8a lbd_0 5112 00:14:09.651 09:04:46 -- host/perf.sh@80 -- # lb_guid=52a22e32-0dc2-4807-9c7e-e87502a9f2cc 00:14:09.651 09:04:46 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 52a22e32-0dc2-4807-9c7e-e87502a9f2cc lvs_n_0 00:14:09.910 09:04:46 -- host/perf.sh@83 -- # ls_nested_guid=ede92bef-4323-40e1-9bbf-a227b07d1f68 00:14:09.910 09:04:46 -- host/perf.sh@84 -- # get_lvs_free_mb ede92bef-4323-40e1-9bbf-a227b07d1f68 00:14:09.910 09:04:46 -- common/autotest_common.sh@1353 -- # local lvs_uuid=ede92bef-4323-40e1-9bbf-a227b07d1f68 00:14:09.910 09:04:46 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:09.910 09:04:46 -- common/autotest_common.sh@1355 -- # local fc 00:14:09.910 09:04:46 -- common/autotest_common.sh@1356 -- # local cs 00:14:09.910 09:04:46 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:10.169 09:04:47 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:10.169 { 00:14:10.169 "uuid": "2ed3cc1f-18e7-411d-a348-5a92ebac0d8a", 00:14:10.169 "name": "lvs_0", 00:14:10.169 "base_bdev": "Nvme0n1", 00:14:10.169 "total_data_clusters": 1278, 00:14:10.169 "free_clusters": 0, 00:14:10.169 "block_size": 4096, 00:14:10.169 "cluster_size": 4194304 00:14:10.169 }, 00:14:10.169 { 00:14:10.169 "uuid": "ede92bef-4323-40e1-9bbf-a227b07d1f68", 00:14:10.169 "name": "lvs_n_0", 00:14:10.169 "base_bdev": "52a22e32-0dc2-4807-9c7e-e87502a9f2cc", 00:14:10.169 "total_data_clusters": 1276, 00:14:10.169 "free_clusters": 1276, 00:14:10.169 "block_size": 4096, 00:14:10.169 "cluster_size": 4194304 00:14:10.169 } 00:14:10.169 ]' 00:14:10.169 09:04:47 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="ede92bef-4323-40e1-9bbf-a227b07d1f68") .free_clusters' 00:14:10.169 09:04:47 -- common/autotest_common.sh@1358 -- # fc=1276 00:14:10.169 09:04:47 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="ede92bef-4323-40e1-9bbf-a227b07d1f68") .cluster_size' 00:14:10.428 5104 00:14:10.428 09:04:47 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:10.428 09:04:47 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:14:10.428 09:04:47 -- common/autotest_common.sh@1363 -- # echo 5104 00:14:10.428 09:04:47 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:10.428 09:04:47 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ede92bef-4323-40e1-9bbf-a227b07d1f68 lbd_nest_0 5104 00:14:10.428 09:04:47 -- host/perf.sh@88 -- # lb_nested_guid=0b7592b2-5493-4fd4-8d0c-a9957b2e9e53 00:14:10.428 09:04:47 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.686 09:04:47 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:10.686 09:04:47 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 0b7592b2-5493-4fd4-8d0c-a9957b2e9e53 00:14:10.946 09:04:47 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.205 09:04:48 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:11.205 09:04:48 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:11.205 09:04:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:11.205 09:04:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:11.205 09:04:48 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:11.465 No valid NVMe controllers or AIO or URING devices found 00:14:11.465 Initializing NVMe Controllers 00:14:11.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:11.465 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:11.465 WARNING: Some requested NVMe devices were skipped 00:14:11.465 09:04:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:11.465 09:04:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:23.757 Initializing NVMe Controllers 00:14:23.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:23.757 Initialization complete. Launching workers. 00:14:23.757 ======================================================== 00:14:23.757 Latency(us) 00:14:23.757 Device Information : IOPS MiB/s Average min max 00:14:23.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 984.79 123.10 1015.02 321.37 8555.66 00:14:23.757 ======================================================== 00:14:23.758 Total : 984.79 123.10 1015.02 321.37 8555.66 00:14:23.758 00:14:23.758 09:04:58 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:23.758 09:04:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:23.758 09:04:58 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:23.758 No valid NVMe controllers or AIO or URING devices found 00:14:23.758 Initializing NVMe Controllers 00:14:23.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.758 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:23.758 WARNING: Some requested NVMe devices were skipped 00:14:23.758 09:04:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:23.758 09:04:58 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:33.735 Initializing NVMe Controllers 00:14:33.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.735 Initialization complete. Launching workers. 
00:14:33.735 ======================================================== 00:14:33.735 Latency(us) 00:14:33.735 Device Information : IOPS MiB/s Average min max 00:14:33.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1356.70 169.59 23640.48 7286.58 63032.41 00:14:33.735 ======================================================== 00:14:33.735 Total : 1356.70 169.59 23640.48 7286.58 63032.41 00:14:33.735 00:14:33.735 09:05:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:33.735 09:05:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:33.735 09:05:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:33.735 No valid NVMe controllers or AIO or URING devices found 00:14:33.735 Initializing NVMe Controllers 00:14:33.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.735 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:33.735 WARNING: Some requested NVMe devices were skipped 00:14:33.735 09:05:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:33.735 09:05:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:43.713 Initializing NVMe Controllers 00:14:43.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.713 Controller IO queue size 128, less than required. 00:14:43.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:43.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:43.713 Initialization complete. Launching workers. 
00:14:43.713 ======================================================== 00:14:43.713 Latency(us) 00:14:43.713 Device Information : IOPS MiB/s Average min max 00:14:43.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4038.70 504.84 31758.72 11705.19 73575.52 00:14:43.713 ======================================================== 00:14:43.713 Total : 4038.70 504.84 31758.72 11705.19 73575.52 00:14:43.713 00:14:43.713 09:05:19 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.713 09:05:20 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0b7592b2-5493-4fd4-8d0c-a9957b2e9e53 00:14:43.713 09:05:20 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:43.971 09:05:20 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 52a22e32-0dc2-4807-9c7e-e87502a9f2cc 00:14:44.230 09:05:21 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:44.489 09:05:21 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:44.489 09:05:21 -- host/perf.sh@114 -- # nvmftestfini 00:14:44.489 09:05:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:44.489 09:05:21 -- nvmf/common.sh@116 -- # sync 00:14:44.489 09:05:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:44.489 09:05:21 -- nvmf/common.sh@119 -- # set +e 00:14:44.489 09:05:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:44.489 09:05:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:44.489 rmmod nvme_tcp 00:14:44.489 rmmod nvme_fabrics 00:14:44.489 rmmod nvme_keyring 00:14:44.489 09:05:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:44.489 09:05:21 -- nvmf/common.sh@123 -- # set -e 00:14:44.489 09:05:21 -- nvmf/common.sh@124 -- # return 0 00:14:44.489 09:05:21 -- nvmf/common.sh@477 -- # '[' -n 68726 ']' 00:14:44.489 09:05:21 -- nvmf/common.sh@478 -- # killprocess 68726 00:14:44.489 09:05:21 -- common/autotest_common.sh@936 -- # '[' -z 68726 ']' 00:14:44.489 09:05:21 -- common/autotest_common.sh@940 -- # kill -0 68726 00:14:44.489 09:05:21 -- common/autotest_common.sh@941 -- # uname 00:14:44.489 09:05:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:44.489 09:05:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68726 00:14:44.489 killing process with pid 68726 00:14:44.489 09:05:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:44.489 09:05:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:44.489 09:05:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68726' 00:14:44.489 09:05:21 -- common/autotest_common.sh@955 -- # kill 68726 00:14:44.489 09:05:21 -- common/autotest_common.sh@960 -- # wait 68726 00:14:46.396 09:05:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:46.396 09:05:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:46.396 09:05:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:46.396 09:05:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.396 09:05:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:46.396 09:05:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.396 09:05:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.396 09:05:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.397 09:05:23 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:14:46.397 ************************************ 00:14:46.397 END TEST nvmf_perf 00:14:46.397 ************************************ 00:14:46.397 00:14:46.397 real 0m51.016s 00:14:46.397 user 3m11.239s 00:14:46.397 sys 0m12.450s 00:14:46.397 09:05:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:46.397 09:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:46.397 09:05:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:46.397 09:05:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:46.397 09:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.397 09:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:46.397 ************************************ 00:14:46.397 START TEST nvmf_fio_host 00:14:46.397 ************************************ 00:14:46.397 09:05:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:46.397 * Looking for test storage... 00:14:46.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:46.397 09:05:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:46.397 09:05:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:46.397 09:05:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:46.397 09:05:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:46.397 09:05:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:46.397 09:05:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:46.397 09:05:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:46.397 09:05:23 -- scripts/common.sh@335 -- # IFS=.-: 00:14:46.397 09:05:23 -- scripts/common.sh@335 -- # read -ra ver1 00:14:46.397 09:05:23 -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.397 09:05:23 -- scripts/common.sh@336 -- # read -ra ver2 00:14:46.397 09:05:23 -- scripts/common.sh@337 -- # local 'op=<' 00:14:46.397 09:05:23 -- scripts/common.sh@339 -- # ver1_l=2 00:14:46.397 09:05:23 -- scripts/common.sh@340 -- # ver2_l=1 00:14:46.397 09:05:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:46.397 09:05:23 -- scripts/common.sh@343 -- # case "$op" in 00:14:46.397 09:05:23 -- scripts/common.sh@344 -- # : 1 00:14:46.397 09:05:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:46.397 09:05:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.397 09:05:23 -- scripts/common.sh@364 -- # decimal 1 00:14:46.397 09:05:23 -- scripts/common.sh@352 -- # local d=1 00:14:46.397 09:05:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.397 09:05:23 -- scripts/common.sh@354 -- # echo 1 00:14:46.397 09:05:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:46.397 09:05:23 -- scripts/common.sh@365 -- # decimal 2 00:14:46.397 09:05:23 -- scripts/common.sh@352 -- # local d=2 00:14:46.397 09:05:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.397 09:05:23 -- scripts/common.sh@354 -- # echo 2 00:14:46.397 09:05:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:46.397 09:05:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:46.397 09:05:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:46.397 09:05:23 -- scripts/common.sh@367 -- # return 0 00:14:46.397 09:05:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.397 09:05:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:46.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.397 --rc genhtml_branch_coverage=1 00:14:46.397 --rc genhtml_function_coverage=1 00:14:46.397 --rc genhtml_legend=1 00:14:46.397 --rc geninfo_all_blocks=1 00:14:46.397 --rc geninfo_unexecuted_blocks=1 00:14:46.397 00:14:46.397 ' 00:14:46.397 09:05:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:46.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.397 --rc genhtml_branch_coverage=1 00:14:46.397 --rc genhtml_function_coverage=1 00:14:46.397 --rc genhtml_legend=1 00:14:46.397 --rc geninfo_all_blocks=1 00:14:46.397 --rc geninfo_unexecuted_blocks=1 00:14:46.397 00:14:46.397 ' 00:14:46.397 09:05:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:46.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.397 --rc genhtml_branch_coverage=1 00:14:46.397 --rc genhtml_function_coverage=1 00:14:46.397 --rc genhtml_legend=1 00:14:46.397 --rc geninfo_all_blocks=1 00:14:46.397 --rc geninfo_unexecuted_blocks=1 00:14:46.397 00:14:46.397 ' 00:14:46.397 09:05:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:46.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.397 --rc genhtml_branch_coverage=1 00:14:46.397 --rc genhtml_function_coverage=1 00:14:46.397 --rc genhtml_legend=1 00:14:46.397 --rc geninfo_all_blocks=1 00:14:46.397 --rc geninfo_unexecuted_blocks=1 00:14:46.397 00:14:46.397 ' 00:14:46.397 09:05:23 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.397 09:05:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.397 09:05:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.397 09:05:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.397 09:05:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.397 09:05:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.397 09:05:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.397 09:05:23 -- paths/export.sh@5 -- # export PATH 00:14:46.397 09:05:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.397 09:05:23 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.397 09:05:23 -- nvmf/common.sh@7 -- # uname -s 00:14:46.397 09:05:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.397 09:05:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.397 09:05:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.397 09:05:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.397 09:05:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.397 09:05:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.397 09:05:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.397 09:05:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.397 09:05:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.397 09:05:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.397 09:05:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:14:46.397 09:05:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:14:46.397 09:05:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.397 09:05:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.397 09:05:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.397 09:05:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.397 09:05:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.397 09:05:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.398 09:05:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.398 09:05:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.398 09:05:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.398 09:05:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.398 09:05:23 -- paths/export.sh@5 -- # export PATH 00:14:46.398 09:05:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.398 09:05:23 -- nvmf/common.sh@46 -- # : 0 00:14:46.398 09:05:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:46.398 09:05:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:46.398 09:05:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:46.398 09:05:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.398 09:05:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.398 09:05:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:46.398 09:05:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:46.398 09:05:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:46.398 09:05:23 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.398 09:05:23 -- host/fio.sh@14 -- # nvmftestinit 00:14:46.398 09:05:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:46.398 09:05:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.398 09:05:23 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:14:46.398 09:05:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:46.398 09:05:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:46.398 09:05:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.398 09:05:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.398 09:05:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.398 09:05:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:46.398 09:05:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:46.398 09:05:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:46.398 09:05:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:46.398 09:05:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:46.398 09:05:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:46.398 09:05:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.398 09:05:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.398 09:05:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:46.398 09:05:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:46.398 09:05:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.398 09:05:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.398 09:05:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.398 09:05:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.398 09:05:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.398 09:05:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.398 09:05:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.398 09:05:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.398 09:05:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:46.657 09:05:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:46.657 Cannot find device "nvmf_tgt_br" 00:14:46.657 09:05:23 -- nvmf/common.sh@154 -- # true 00:14:46.657 09:05:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.657 Cannot find device "nvmf_tgt_br2" 00:14:46.657 09:05:23 -- nvmf/common.sh@155 -- # true 00:14:46.657 09:05:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:46.658 09:05:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:46.658 Cannot find device "nvmf_tgt_br" 00:14:46.658 09:05:23 -- nvmf/common.sh@157 -- # true 00:14:46.658 09:05:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:46.658 Cannot find device "nvmf_tgt_br2" 00:14:46.658 09:05:23 -- nvmf/common.sh@158 -- # true 00:14:46.658 09:05:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:46.658 09:05:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:46.658 09:05:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.658 09:05:23 -- nvmf/common.sh@161 -- # true 00:14:46.658 09:05:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.658 09:05:23 -- nvmf/common.sh@162 -- # true 00:14:46.658 09:05:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.658 09:05:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.658 09:05:23 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.658 09:05:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.658 09:05:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.658 09:05:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.658 09:05:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.658 09:05:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:46.658 09:05:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:46.658 09:05:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:46.658 09:05:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:46.917 09:05:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:46.917 09:05:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:46.917 09:05:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.917 09:05:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.917 09:05:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.917 09:05:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:46.917 09:05:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:46.917 09:05:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.917 09:05:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.917 09:05:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.917 09:05:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.917 09:05:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.917 09:05:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:46.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:14:46.917 00:14:46.917 --- 10.0.0.2 ping statistics --- 00:14:46.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.917 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:46.917 09:05:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:46.917 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.917 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:14:46.917 00:14:46.917 --- 10.0.0.3 ping statistics --- 00:14:46.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.917 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:46.917 09:05:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:46.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:46.917 00:14:46.917 --- 10.0.0.1 ping statistics --- 00:14:46.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.917 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:46.917 09:05:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.917 09:05:23 -- nvmf/common.sh@421 -- # return 0 00:14:46.917 09:05:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:46.917 09:05:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.917 09:05:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:46.917 09:05:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:46.917 09:05:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.917 09:05:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:46.917 09:05:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:46.917 09:05:23 -- host/fio.sh@16 -- # [[ y != y ]] 00:14:46.917 09:05:23 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:46.917 09:05:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:46.917 09:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:46.917 09:05:23 -- host/fio.sh@24 -- # nvmfpid=69558 00:14:46.917 09:05:23 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.917 09:05:23 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:46.917 09:05:23 -- host/fio.sh@28 -- # waitforlisten 69558 00:14:46.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.917 09:05:23 -- common/autotest_common.sh@829 -- # '[' -z 69558 ']' 00:14:46.917 09:05:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.917 09:05:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.917 09:05:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.917 09:05:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.917 09:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:46.917 [2024-11-17 09:05:23.762803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:46.917 [2024-11-17 09:05:23.762901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.177 [2024-11-17 09:05:23.902309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.177 [2024-11-17 09:05:23.972110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:47.177 [2024-11-17 09:05:23.972499] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.177 [2024-11-17 09:05:23.972681] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.177 [2024-11-17 09:05:23.972939] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:47.177 [2024-11-17 09:05:23.973288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.177 [2024-11-17 09:05:23.973425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.177 [2024-11-17 09:05:23.973506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.177 [2024-11-17 09:05:23.973508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.115 09:05:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.115 09:05:24 -- common/autotest_common.sh@862 -- # return 0 00:14:48.115 09:05:24 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:48.374 [2024-11-17 09:05:25.081290] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.374 09:05:25 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:48.374 09:05:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.374 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:14:48.374 09:05:25 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:48.633 Malloc1 00:14:48.633 09:05:25 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.892 09:05:25 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.150 09:05:25 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.409 [2024-11-17 09:05:26.128560] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.409 09:05:26 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.667 09:05:26 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:49.667 09:05:26 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.668 09:05:26 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.668 09:05:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:49.668 09:05:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:49.668 09:05:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:49.668 09:05:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.668 09:05:26 -- common/autotest_common.sh@1330 -- # shift 00:14:49.668 09:05:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:49.668 09:05:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.668 09:05:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.668 09:05:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:49.668 09:05:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:49.668 09:05:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:49.668 09:05:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:49.668 09:05:26 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.668 09:05:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.668 09:05:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:49.668 09:05:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:49.668 09:05:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:49.668 09:05:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:49.668 09:05:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:49.668 09:05:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.668 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:49.668 fio-3.35 00:14:49.668 Starting 1 thread 00:14:52.201 00:14:52.201 test: (groupid=0, jobs=1): err= 0: pid=69636: Sun Nov 17 09:05:28 2024 00:14:52.201 read: IOPS=9635, BW=37.6MiB/s (39.5MB/s)(75.5MiB/2006msec) 00:14:52.201 slat (nsec): min=1919, max=347231, avg=2387.55, stdev=3384.47 00:14:52.201 clat (usec): min=2665, max=12374, avg=6910.80, stdev=562.22 00:14:52.201 lat (usec): min=2725, max=12376, avg=6913.19, stdev=562.14 00:14:52.201 clat percentiles (usec): 00:14:52.201 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6456], 00:14:52.201 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 6980], 00:14:52.201 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7832], 00:14:52.201 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[10945], 99.95th=[11731], 00:14:52.201 | 99.99th=[12387] 00:14:52.201 bw ( KiB/s): min=37888, max=39552, per=99.96%, avg=38524.00, stdev=732.99, samples=4 00:14:52.201 iops : min= 9472, max= 9888, avg=9631.00, stdev=183.25, samples=4 00:14:52.201 write: IOPS=9642, BW=37.7MiB/s (39.5MB/s)(75.6MiB/2006msec); 0 zone resets 00:14:52.201 slat (nsec): min=1968, max=250535, avg=2491.63, stdev=2437.16 00:14:52.201 clat (usec): min=2517, max=11010, avg=6318.65, stdev=499.71 00:14:52.201 lat (usec): min=2530, max=11012, avg=6321.14, stdev=499.74 00:14:52.201 clat percentiles (usec): 00:14:52.201 | 1.00th=[ 5342], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5932], 00:14:52.201 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:14:52.201 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7177], 00:14:52.201 | 99.00th=[ 7570], 99.50th=[ 7832], 99.90th=[ 9110], 99.95th=[ 9896], 00:14:52.201 | 99.99th=[10945] 00:14:52.201 bw ( KiB/s): min=37848, max=39104, per=99.98%, avg=38562.00, stdev=572.78, samples=4 00:14:52.201 iops : min= 9462, max= 9776, avg=9640.50, stdev=143.20, samples=4 00:14:52.201 lat (msec) : 4=0.07%, 10=99.84%, 20=0.09% 00:14:52.201 cpu : usr=69.78%, sys=22.34%, ctx=5, majf=0, minf=5 00:14:52.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:52.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:52.201 issued rwts: total=19328,19342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:52.201 00:14:52.201 Run status group 0 (all jobs): 00:14:52.201 READ: bw=37.6MiB/s (39.5MB/s), 37.6MiB/s-37.6MiB/s (39.5MB/s-39.5MB/s), io=75.5MiB (79.2MB), 
run=2006-2006msec 00:14:52.201 WRITE: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=75.6MiB (79.2MB), run=2006-2006msec 00:14:52.201 09:05:28 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:52.201 09:05:28 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:52.201 09:05:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:52.201 09:05:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.201 09:05:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:52.201 09:05:28 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.201 09:05:28 -- common/autotest_common.sh@1330 -- # shift 00:14:52.201 09:05:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:52.201 09:05:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.201 09:05:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.201 09:05:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:52.201 09:05:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:52.201 09:05:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:52.201 09:05:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:52.201 09:05:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.201 09:05:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.201 09:05:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:52.201 09:05:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:52.201 09:05:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:52.201 09:05:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:52.201 09:05:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:52.201 09:05:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:52.201 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:52.201 fio-3.35 00:14:52.201 Starting 1 thread 00:14:54.745 00:14:54.745 test: (groupid=0, jobs=1): err= 0: pid=69683: Sun Nov 17 09:05:31 2024 00:14:54.745 read: IOPS=8379, BW=131MiB/s (137MB/s)(263MiB/2010msec) 00:14:54.745 slat (usec): min=2, max=126, avg= 4.04, stdev= 2.38 00:14:54.745 clat (usec): min=199, max=15975, avg=8241.83, stdev=2507.13 00:14:54.745 lat (usec): min=210, max=15978, avg=8245.87, stdev=2507.34 00:14:54.745 clat percentiles (usec): 00:14:54.745 | 1.00th=[ 4080], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 5932], 00:14:54.745 | 30.00th=[ 6587], 40.00th=[ 7242], 50.00th=[ 7898], 60.00th=[ 8586], 00:14:54.745 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[11863], 95.00th=[12780], 00:14:54.745 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15401], 99.95th=[15533], 00:14:54.745 | 99.99th=[15926] 00:14:54.745 bw ( KiB/s): min=65664, max=78464, per=51.69%, avg=69296.00, stdev=6133.57, samples=4 00:14:54.745 iops : 
min= 4104, max= 4904, avg=4331.00, stdev=383.35, samples=4 00:14:54.745 write: IOPS=4775, BW=74.6MiB/s (78.2MB/s)(141MiB/1885msec); 0 zone resets 00:14:54.745 slat (usec): min=31, max=200, avg=40.39, stdev= 8.69 00:14:54.745 clat (usec): min=4319, max=22960, avg=12287.73, stdev=2416.58 00:14:54.745 lat (usec): min=4353, max=22999, avg=12328.13, stdev=2418.73 00:14:54.745 clat percentiles (usec): 00:14:54.745 | 1.00th=[ 7767], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10290], 00:14:54.745 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:14:54.745 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15533], 95.00th=[16909], 00:14:54.745 | 99.00th=[19792], 99.50th=[21103], 99.90th=[22414], 99.95th=[22676], 00:14:54.745 | 99.99th=[22938] 00:14:54.745 bw ( KiB/s): min=67520, max=82368, per=94.25%, avg=72016.00, stdev=7027.02, samples=4 00:14:54.745 iops : min= 4220, max= 5148, avg=4501.00, stdev=439.19, samples=4 00:14:54.745 lat (usec) : 250=0.01% 00:14:54.745 lat (msec) : 2=0.01%, 4=0.50%, 10=52.78%, 20=46.45%, 50=0.26% 00:14:54.745 cpu : usr=79.50%, sys=14.58%, ctx=5, majf=0, minf=1 00:14:54.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:54.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:54.745 issued rwts: total=16843,9002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:54.745 00:14:54.745 Run status group 0 (all jobs): 00:14:54.745 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2010-2010msec 00:14:54.745 WRITE: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=141MiB (147MB), run=1885-1885msec 00:14:54.745 09:05:31 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.745 09:05:31 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:14:54.745 09:05:31 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:14:54.745 09:05:31 -- host/fio.sh@51 -- # get_nvme_bdfs 00:14:54.745 09:05:31 -- common/autotest_common.sh@1508 -- # bdfs=() 00:14:54.745 09:05:31 -- common/autotest_common.sh@1508 -- # local bdfs 00:14:54.745 09:05:31 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:54.745 09:05:31 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:14:54.745 09:05:31 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:55.004 09:05:31 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:14:55.004 09:05:31 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:14:55.004 09:05:31 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:14:55.262 Nvme0n1 00:14:55.262 09:05:31 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:14:55.521 09:05:32 -- host/fio.sh@53 -- # ls_guid=7eba96fb-50bf-472b-92a6-96b181fc3c1c 00:14:55.521 09:05:32 -- host/fio.sh@54 -- # get_lvs_free_mb 7eba96fb-50bf-472b-92a6-96b181fc3c1c 00:14:55.521 09:05:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=7eba96fb-50bf-472b-92a6-96b181fc3c1c 00:14:55.521 09:05:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:55.521 09:05:32 -- 
common/autotest_common.sh@1355 -- # local fc 00:14:55.521 09:05:32 -- common/autotest_common.sh@1356 -- # local cs 00:14:55.521 09:05:32 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:55.780 09:05:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:55.780 { 00:14:55.780 "uuid": "7eba96fb-50bf-472b-92a6-96b181fc3c1c", 00:14:55.780 "name": "lvs_0", 00:14:55.780 "base_bdev": "Nvme0n1", 00:14:55.780 "total_data_clusters": 4, 00:14:55.780 "free_clusters": 4, 00:14:55.780 "block_size": 4096, 00:14:55.780 "cluster_size": 1073741824 00:14:55.780 } 00:14:55.780 ]' 00:14:55.780 09:05:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="7eba96fb-50bf-472b-92a6-96b181fc3c1c") .free_clusters' 00:14:55.780 09:05:32 -- common/autotest_common.sh@1358 -- # fc=4 00:14:55.780 09:05:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="7eba96fb-50bf-472b-92a6-96b181fc3c1c") .cluster_size' 00:14:55.780 09:05:32 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:14:55.780 4096 00:14:55.780 09:05:32 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:14:55.780 09:05:32 -- common/autotest_common.sh@1363 -- # echo 4096 00:14:55.780 09:05:32 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:14:56.039 e4eb8c67-9e8c-4bf9-90da-4de57ba087a9 00:14:56.039 09:05:32 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:14:56.298 09:05:33 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:14:56.601 09:05:33 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:56.862 09:05:33 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:56.862 09:05:33 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:56.862 09:05:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:56.862 09:05:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:56.862 09:05:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:56.862 09:05:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.862 09:05:33 -- common/autotest_common.sh@1330 -- # shift 00:14:56.862 09:05:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:56.862 09:05:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.862 09:05:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.862 09:05:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:56.862 09:05:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:56.862 09:05:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:56.862 09:05:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:56.862 09:05:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.862 09:05:33 -- common/autotest_common.sh@1334 -- # grep 
libclang_rt.asan 00:14:56.862 09:05:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.862 09:05:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:56.862 09:05:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:56.862 09:05:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:56.862 09:05:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:56.862 09:05:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:56.862 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:56.862 fio-3.35 00:14:56.862 Starting 1 thread 00:14:59.393 00:14:59.393 test: (groupid=0, jobs=1): err= 0: pid=69794: Sun Nov 17 09:05:36 2024 00:14:59.393 read: IOPS=6552, BW=25.6MiB/s (26.8MB/s)(51.4MiB/2008msec) 00:14:59.393 slat (nsec): min=1959, max=327362, avg=2741.20, stdev=3909.70 00:14:59.393 clat (usec): min=2944, max=17549, avg=10188.61, stdev=849.48 00:14:59.393 lat (usec): min=2954, max=17552, avg=10191.36, stdev=849.19 00:14:59.393 clat percentiles (usec): 00:14:59.393 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:14:59.393 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:14:59.393 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:14:59.393 | 99.00th=[12125], 99.50th=[12518], 99.90th=[16057], 99.95th=[17171], 00:14:59.393 | 99.99th=[17433] 00:14:59.393 bw ( KiB/s): min=25096, max=26872, per=99.89%, avg=26182.00, stdev=763.33, samples=4 00:14:59.393 iops : min= 6274, max= 6718, avg=6545.50, stdev=190.83, samples=4 00:14:59.393 write: IOPS=6564, BW=25.6MiB/s (26.9MB/s)(51.5MiB/2008msec); 0 zone resets 00:14:59.393 slat (usec): min=2, max=258, avg= 2.84, stdev= 2.84 00:14:59.393 clat (usec): min=2453, max=16960, avg=9256.46, stdev=791.82 00:14:59.394 lat (usec): min=2467, max=16962, avg=9259.29, stdev=791.67 00:14:59.394 clat percentiles (usec): 00:14:59.394 | 1.00th=[ 7570], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8586], 00:14:59.394 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:14:59.394 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:14:59.394 | 99.00th=[10945], 99.50th=[11207], 99.90th=[14746], 99.95th=[16057], 00:14:59.394 | 99.99th=[16319] 00:14:59.394 bw ( KiB/s): min=26048, max=26624, per=99.94%, avg=26242.00, stdev=271.56, samples=4 00:14:59.394 iops : min= 6512, max= 6656, avg=6560.50, stdev=67.89, samples=4 00:14:59.394 lat (msec) : 4=0.06%, 10=62.97%, 20=36.97% 00:14:59.394 cpu : usr=70.45%, sys=23.12%, ctx=25, majf=0, minf=14 00:14:59.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:59.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:59.394 issued rwts: total=13158,13181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:59.394 00:14:59.394 Run status group 0 (all jobs): 00:14:59.394 READ: bw=25.6MiB/s (26.8MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.8MB/s), io=51.4MiB (53.9MB), run=2008-2008msec 00:14:59.394 WRITE: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=51.5MiB (54.0MB), run=2008-2008msec 
00:14:59.394 09:05:36 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:59.394 09:05:36 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:14:59.652 09:05:36 -- host/fio.sh@64 -- # ls_nested_guid=1073a213-5c5c-45d8-83fc-3a162e1501e5 00:14:59.652 09:05:36 -- host/fio.sh@65 -- # get_lvs_free_mb 1073a213-5c5c-45d8-83fc-3a162e1501e5 00:14:59.652 09:05:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=1073a213-5c5c-45d8-83fc-3a162e1501e5 00:14:59.652 09:05:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:59.652 09:05:36 -- common/autotest_common.sh@1355 -- # local fc 00:14:59.652 09:05:36 -- common/autotest_common.sh@1356 -- # local cs 00:14:59.652 09:05:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:59.912 09:05:36 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:59.912 { 00:14:59.912 "uuid": "7eba96fb-50bf-472b-92a6-96b181fc3c1c", 00:14:59.912 "name": "lvs_0", 00:14:59.912 "base_bdev": "Nvme0n1", 00:14:59.912 "total_data_clusters": 4, 00:14:59.912 "free_clusters": 0, 00:14:59.912 "block_size": 4096, 00:14:59.912 "cluster_size": 1073741824 00:14:59.912 }, 00:14:59.912 { 00:14:59.912 "uuid": "1073a213-5c5c-45d8-83fc-3a162e1501e5", 00:14:59.912 "name": "lvs_n_0", 00:14:59.912 "base_bdev": "e4eb8c67-9e8c-4bf9-90da-4de57ba087a9", 00:14:59.912 "total_data_clusters": 1022, 00:14:59.912 "free_clusters": 1022, 00:14:59.912 "block_size": 4096, 00:14:59.912 "cluster_size": 4194304 00:14:59.912 } 00:14:59.912 ]' 00:14:59.912 09:05:36 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="1073a213-5c5c-45d8-83fc-3a162e1501e5") .free_clusters' 00:15:00.171 09:05:36 -- common/autotest_common.sh@1358 -- # fc=1022 00:15:00.171 09:05:36 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="1073a213-5c5c-45d8-83fc-3a162e1501e5") .cluster_size' 00:15:00.171 09:05:36 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:00.171 09:05:36 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:15:00.171 4088 00:15:00.171 09:05:36 -- common/autotest_common.sh@1363 -- # echo 4088 00:15:00.171 09:05:36 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:00.430 2c619731-3edf-4f49-9853-944b03b1bc8d 00:15:00.430 09:05:37 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:00.689 09:05:37 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:00.689 09:05:37 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:00.949 09:05:37 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:00.949 09:05:37 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:00.949 09:05:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:00.949 09:05:37 -- common/autotest_common.sh@1328 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:15:00.949 09:05:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:00.949 09:05:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:00.949 09:05:37 -- common/autotest_common.sh@1330 -- # shift 00:15:00.949 09:05:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:00.949 09:05:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:00.949 09:05:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:00.949 09:05:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:00.949 09:05:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:00.949 09:05:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:00.949 09:05:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:00.949 09:05:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:00.949 09:05:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:00.949 09:05:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:00.949 09:05:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:01.207 09:05:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:01.207 09:05:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:01.207 09:05:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:01.207 09:05:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:01.207 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:01.207 fio-3.35 00:15:01.207 Starting 1 thread 00:15:03.738 00:15:03.738 test: (groupid=0, jobs=1): err= 0: pid=69867: Sun Nov 17 09:05:40 2024 00:15:03.738 read: IOPS=5828, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2010msec) 00:15:03.738 slat (nsec): min=1979, max=831492, avg=3084.90, stdev=8714.97 00:15:03.738 clat (usec): min=4100, max=19815, avg=11468.58, stdev=973.84 00:15:03.738 lat (usec): min=4112, max=19818, avg=11471.66, stdev=973.14 00:15:03.738 clat percentiles (usec): 00:15:03.738 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:15:03.738 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:15:03.738 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:15:03.738 | 99.00th=[13698], 99.50th=[14222], 99.90th=[18220], 99.95th=[19268], 00:15:03.738 | 99.99th=[19792] 00:15:03.738 bw ( KiB/s): min=22520, max=23752, per=99.95%, avg=23304.00, stdev=567.19, samples=4 00:15:03.738 iops : min= 5630, max= 5938, avg=5826.00, stdev=141.80, samples=4 00:15:03.738 write: IOPS=5814, BW=22.7MiB/s (23.8MB/s)(45.7MiB/2010msec); 0 zone resets 00:15:03.738 slat (usec): min=2, max=424, avg= 3.17, stdev= 4.50 00:15:03.738 clat (usec): min=3905, max=19755, avg=10411.76, stdev=936.02 00:15:03.738 lat (usec): min=3965, max=19758, avg=10414.93, stdev=935.74 00:15:03.738 clat percentiles (usec): 00:15:03.738 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:15:03.738 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:15:03.738 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:15:03.739 | 99.00th=[12518], 99.50th=[13042], 99.90th=[17957], 
99.95th=[19530], 00:15:03.739 | 99.99th=[19530] 00:15:03.739 bw ( KiB/s): min=23040, max=23368, per=99.96%, avg=23250.00, stdev=144.06, samples=4 00:15:03.739 iops : min= 5760, max= 5842, avg=5812.50, stdev=36.01, samples=4 00:15:03.739 lat (msec) : 4=0.01%, 10=18.38%, 20=81.62% 00:15:03.739 cpu : usr=70.38%, sys=23.15%, ctx=30, majf=0, minf=14 00:15:03.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:03.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:03.739 issued rwts: total=11716,11688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.739 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:03.739 00:15:03.739 Run status group 0 (all jobs): 00:15:03.739 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2010-2010msec 00:15:03.739 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.7MiB (47.9MB), run=2010-2010msec 00:15:03.739 09:05:40 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:03.739 09:05:40 -- host/fio.sh@74 -- # sync 00:15:03.739 09:05:40 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:03.998 09:05:40 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:04.256 09:05:41 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:15:04.515 09:05:41 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:04.772 09:05:41 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:05.707 09:05:42 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:05.707 09:05:42 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:05.707 09:05:42 -- host/fio.sh@86 -- # nvmftestfini 00:15:05.707 09:05:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.707 09:05:42 -- nvmf/common.sh@116 -- # sync 00:15:05.707 09:05:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:05.707 09:05:42 -- nvmf/common.sh@119 -- # set +e 00:15:05.707 09:05:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:05.707 09:05:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:05.707 rmmod nvme_tcp 00:15:05.707 rmmod nvme_fabrics 00:15:05.707 rmmod nvme_keyring 00:15:05.707 09:05:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:05.707 09:05:42 -- nvmf/common.sh@123 -- # set -e 00:15:05.707 09:05:42 -- nvmf/common.sh@124 -- # return 0 00:15:05.707 09:05:42 -- nvmf/common.sh@477 -- # '[' -n 69558 ']' 00:15:05.707 09:05:42 -- nvmf/common.sh@478 -- # killprocess 69558 00:15:05.707 09:05:42 -- common/autotest_common.sh@936 -- # '[' -z 69558 ']' 00:15:05.707 09:05:42 -- common/autotest_common.sh@940 -- # kill -0 69558 00:15:05.707 09:05:42 -- common/autotest_common.sh@941 -- # uname 00:15:05.707 09:05:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.707 09:05:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69558 00:15:05.707 killing process with pid 69558 00:15:05.707 09:05:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:05.707 09:05:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:05.707 09:05:42 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 69558' 00:15:05.707 09:05:42 -- common/autotest_common.sh@955 -- # kill 69558 00:15:05.707 09:05:42 -- common/autotest_common.sh@960 -- # wait 69558 00:15:05.965 09:05:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.965 09:05:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.965 09:05:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.965 09:05:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.965 09:05:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.965 09:05:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.965 09:05:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.965 09:05:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.965 09:05:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:05.965 00:15:05.965 real 0m19.748s 00:15:05.965 user 1m25.867s 00:15:05.965 sys 0m4.513s 00:15:05.965 09:05:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:05.965 09:05:42 -- common/autotest_common.sh@10 -- # set +x 00:15:05.965 ************************************ 00:15:05.965 END TEST nvmf_fio_host 00:15:05.965 ************************************ 00:15:06.224 09:05:42 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:06.224 09:05:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:06.225 09:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.225 09:05:42 -- common/autotest_common.sh@10 -- # set +x 00:15:06.225 ************************************ 00:15:06.225 START TEST nvmf_failover 00:15:06.225 ************************************ 00:15:06.225 09:05:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:06.225 * Looking for test storage... 00:15:06.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:06.225 09:05:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:06.225 09:05:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:06.225 09:05:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:06.225 09:05:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:06.225 09:05:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:06.225 09:05:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:06.225 09:05:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:06.225 09:05:43 -- scripts/common.sh@335 -- # IFS=.-: 00:15:06.225 09:05:43 -- scripts/common.sh@335 -- # read -ra ver1 00:15:06.225 09:05:43 -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.225 09:05:43 -- scripts/common.sh@336 -- # read -ra ver2 00:15:06.225 09:05:43 -- scripts/common.sh@337 -- # local 'op=<' 00:15:06.225 09:05:43 -- scripts/common.sh@339 -- # ver1_l=2 00:15:06.225 09:05:43 -- scripts/common.sh@340 -- # ver2_l=1 00:15:06.225 09:05:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:06.225 09:05:43 -- scripts/common.sh@343 -- # case "$op" in 00:15:06.225 09:05:43 -- scripts/common.sh@344 -- # : 1 00:15:06.225 09:05:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:06.225 09:05:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:06.225 09:05:43 -- scripts/common.sh@364 -- # decimal 1 00:15:06.225 09:05:43 -- scripts/common.sh@352 -- # local d=1 00:15:06.225 09:05:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.225 09:05:43 -- scripts/common.sh@354 -- # echo 1 00:15:06.225 09:05:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:06.225 09:05:43 -- scripts/common.sh@365 -- # decimal 2 00:15:06.225 09:05:43 -- scripts/common.sh@352 -- # local d=2 00:15:06.225 09:05:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.225 09:05:43 -- scripts/common.sh@354 -- # echo 2 00:15:06.225 09:05:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:06.225 09:05:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:06.225 09:05:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:06.225 09:05:43 -- scripts/common.sh@367 -- # return 0 00:15:06.225 09:05:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.225 09:05:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:06.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.225 --rc genhtml_branch_coverage=1 00:15:06.225 --rc genhtml_function_coverage=1 00:15:06.225 --rc genhtml_legend=1 00:15:06.225 --rc geninfo_all_blocks=1 00:15:06.225 --rc geninfo_unexecuted_blocks=1 00:15:06.225 00:15:06.225 ' 00:15:06.225 09:05:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:06.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.225 --rc genhtml_branch_coverage=1 00:15:06.225 --rc genhtml_function_coverage=1 00:15:06.225 --rc genhtml_legend=1 00:15:06.225 --rc geninfo_all_blocks=1 00:15:06.225 --rc geninfo_unexecuted_blocks=1 00:15:06.225 00:15:06.225 ' 00:15:06.225 09:05:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:06.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.225 --rc genhtml_branch_coverage=1 00:15:06.225 --rc genhtml_function_coverage=1 00:15:06.225 --rc genhtml_legend=1 00:15:06.225 --rc geninfo_all_blocks=1 00:15:06.225 --rc geninfo_unexecuted_blocks=1 00:15:06.225 00:15:06.225 ' 00:15:06.225 09:05:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:06.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.225 --rc genhtml_branch_coverage=1 00:15:06.225 --rc genhtml_function_coverage=1 00:15:06.225 --rc genhtml_legend=1 00:15:06.225 --rc geninfo_all_blocks=1 00:15:06.225 --rc geninfo_unexecuted_blocks=1 00:15:06.225 00:15:06.225 ' 00:15:06.225 09:05:43 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.225 09:05:43 -- nvmf/common.sh@7 -- # uname -s 00:15:06.225 09:05:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.225 09:05:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.225 09:05:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.225 09:05:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.225 09:05:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.225 09:05:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.225 09:05:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.225 09:05:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.225 09:05:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.225 09:05:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.225 09:05:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:15:06.225 
09:05:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:15:06.225 09:05:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.225 09:05:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.225 09:05:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.225 09:05:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.225 09:05:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.225 09:05:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.225 09:05:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.225 09:05:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.225 09:05:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.225 09:05:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.225 09:05:43 -- paths/export.sh@5 -- # export PATH 00:15:06.225 09:05:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.225 09:05:43 -- nvmf/common.sh@46 -- # : 0 00:15:06.225 09:05:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:06.225 09:05:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:06.225 09:05:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:06.225 09:05:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.225 09:05:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.225 09:05:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:06.225 09:05:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:06.225 09:05:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:06.225 09:05:43 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.225 09:05:43 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.225 09:05:43 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.225 09:05:43 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:06.225 09:05:43 -- host/failover.sh@18 -- # nvmftestinit 00:15:06.225 09:05:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:06.225 09:05:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.225 09:05:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:06.225 09:05:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:06.225 09:05:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:06.225 09:05:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.225 09:05:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.225 09:05:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.225 09:05:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:06.225 09:05:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:06.225 09:05:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:06.225 09:05:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:06.225 09:05:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:06.225 09:05:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:06.225 09:05:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.225 09:05:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.225 09:05:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:06.225 09:05:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:06.225 09:05:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:06.225 09:05:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:06.225 09:05:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:06.225 09:05:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.225 09:05:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:06.225 09:05:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:06.225 09:05:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:06.225 09:05:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:06.225 09:05:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:06.226 09:05:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:06.226 Cannot find device "nvmf_tgt_br" 00:15:06.226 09:05:43 -- nvmf/common.sh@154 -- # true 00:15:06.226 09:05:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.226 Cannot find device "nvmf_tgt_br2" 00:15:06.226 09:05:43 -- nvmf/common.sh@155 -- # true 00:15:06.226 09:05:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:06.226 09:05:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:06.484 Cannot find device "nvmf_tgt_br" 00:15:06.484 09:05:43 -- nvmf/common.sh@157 -- # true 00:15:06.484 09:05:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:06.484 Cannot find device "nvmf_tgt_br2" 00:15:06.484 09:05:43 -- nvmf/common.sh@158 -- # true 00:15:06.484 09:05:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:06.484 09:05:43 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:15:06.484 09:05:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.484 09:05:43 -- nvmf/common.sh@161 -- # true 00:15:06.484 09:05:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.484 09:05:43 -- nvmf/common.sh@162 -- # true 00:15:06.484 09:05:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:06.484 09:05:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:06.484 09:05:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:06.484 09:05:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:06.484 09:05:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.484 09:05:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.484 09:05:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.484 09:05:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:06.484 09:05:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:06.484 09:05:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:06.484 09:05:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:06.484 09:05:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:06.484 09:05:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:06.484 09:05:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:06.484 09:05:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:06.484 09:05:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:06.484 09:05:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:06.484 09:05:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:06.484 09:05:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:06.484 09:05:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:06.484 09:05:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:06.484 09:05:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:06.484 09:05:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:06.484 09:05:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:06.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:15:06.484 00:15:06.485 --- 10.0.0.2 ping statistics --- 00:15:06.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.485 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:06.485 09:05:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:06.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:06.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:06.485 00:15:06.485 --- 10.0.0.3 ping statistics --- 00:15:06.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.485 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:06.485 09:05:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:06.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:06.485 00:15:06.485 --- 10.0.0.1 ping statistics --- 00:15:06.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.485 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:06.485 09:05:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.485 09:05:43 -- nvmf/common.sh@421 -- # return 0 00:15:06.485 09:05:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:06.485 09:05:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.485 09:05:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:06.485 09:05:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:06.485 09:05:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.485 09:05:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:06.485 09:05:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:06.485 09:05:43 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:06.485 09:05:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:06.485 09:05:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.485 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:15:06.743 09:05:43 -- nvmf/common.sh@469 -- # nvmfpid=70118 00:15:06.743 09:05:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:06.743 09:05:43 -- nvmf/common.sh@470 -- # waitforlisten 70118 00:15:06.743 09:05:43 -- common/autotest_common.sh@829 -- # '[' -z 70118 ']' 00:15:06.743 09:05:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.743 09:05:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.743 09:05:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.743 09:05:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.743 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:15:06.743 [2024-11-17 09:05:43.466760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:06.743 [2024-11-17 09:05:43.466848] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.743 [2024-11-17 09:05:43.605487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.743 [2024-11-17 09:05:43.657227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:06.743 [2024-11-17 09:05:43.657389] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.743 [2024-11-17 09:05:43.657402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
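For reference, the nvmf_veth_init portion of the trace above (everything before the nvmf_tgt launch) reduces to roughly the following sequence. Interface names, addresses, and ports are taken from the log itself; this is a condensed sketch of what the helper did on this run, not a verbatim copy of it.

  # One initiator-side veth pair plus two target-side pairs whose "if" ends are
  # moved into a private namespace; the host-side ends are tied together by a bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # reachability checks, as in the log

The successful pings to 10.0.0.2 and 10.0.0.3 confirm that both target-side addresses are reachable across the bridge before nvmf_tgt is started inside the namespace.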
00:15:06.743 [2024-11-17 09:05:43.657410] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.743 [2024-11-17 09:05:43.658163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.743 [2024-11-17 09:05:43.658309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.743 [2024-11-17 09:05:43.658329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.678 09:05:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.678 09:05:44 -- common/autotest_common.sh@862 -- # return 0 00:15:07.678 09:05:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:07.678 09:05:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.678 09:05:44 -- common/autotest_common.sh@10 -- # set +x 00:15:07.678 09:05:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.678 09:05:44 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.936 [2024-11-17 09:05:44.787353] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.936 09:05:44 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:08.195 Malloc0 00:15:08.195 09:05:45 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.761 09:05:45 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.761 09:05:45 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.019 [2024-11-17 09:05:45.933963] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.278 09:05:45 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:09.535 [2024-11-17 09:05:46.210210] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:09.535 09:05:46 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:09.535 [2024-11-17 09:05:46.442435] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:09.535 09:05:46 -- host/failover.sh@31 -- # bdevperf_pid=70181 00:15:09.535 09:05:46 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:09.535 09:05:46 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:09.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
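Before the failover steps begin, the RPC calls traced above have configured a single TCP subsystem backed by a malloc bdev, exposed it on three ports of the same address, and launched bdevperf in RPC-driven mode. Condensed into a script (paths, names, and arguments exactly as logged; the port loop and the backgrounding are shorthand for readability, not part of the test script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side (RPC over the default /var/tmp/spdk.sock): TCP transport, a 64 MB
  # malloc bdev with 512-byte blocks, one subsystem, one namespace, three listeners.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

  # Initiator side: bdevperf with its own RPC socket; per the trace, the 128-deep,
  # 4 KiB verify workload only starts once perform_tests is issued later.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &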
00:15:09.535 09:05:46 -- host/failover.sh@34 -- # waitforlisten 70181 /var/tmp/bdevperf.sock 00:15:09.535 09:05:46 -- common/autotest_common.sh@829 -- # '[' -z 70181 ']' 00:15:09.535 09:05:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.535 09:05:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.535 09:05:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.535 09:05:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.535 09:05:46 -- common/autotest_common.sh@10 -- # set +x 00:15:10.929 09:05:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.929 09:05:47 -- common/autotest_common.sh@862 -- # return 0 00:15:10.929 09:05:47 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:10.929 NVMe0n1 00:15:10.929 09:05:47 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:11.219 00:15:11.219 09:05:48 -- host/failover.sh@39 -- # run_test_pid=70205 00:15:11.219 09:05:48 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:11.219 09:05:48 -- host/failover.sh@41 -- # sleep 1 00:15:12.153 09:05:49 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.413 [2024-11-17 09:05:49.290912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6d00 is same with the state(5) to be set 00:15:12.413 [2024-11-17 09:05:49.290964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6d00 is same with the state(5) to be set 00:15:12.413 [2024-11-17 09:05:49.290991] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6d00 is same with the state(5) to be set 00:15:12.413 [2024-11-17 09:05:49.291014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6d00 is same with the state(5) to be set 00:15:12.413 [2024-11-17 09:05:49.291022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6d00 is same with the state(5) to be set 00:15:12.413 [2024-11-17 09:05:49.291029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6d00 is same with the state(5) to be set 00:15:12.413 [2024-11-17 09:05:49.291037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6d00 is same with the state(5) to be set 00:15:12.413 [2024-11-17 09:05:49.291045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6d00 is same with the state(5) to be set 00:15:12.413 09:05:49 -- host/failover.sh@45 -- # sleep 3 00:15:15.690 09:05:52 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:15.949 00:15:15.949 09:05:52 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:15.949 [2024-11-17 09:05:52.872851] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873252] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873316] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 
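The bursts of "recv state of tqpair" messages here and above line up with the listener changes that host/failover.sh drives over RPC: each time a listener is removed, the established TCP qpairs on that port are torn down while I/O is still in flight. Laid end to end, the initiator-side exercise visible in the trace is roughly the following sketch (ports and ordering as logged; the trailing wait stands in for the script's wait on the perform_tests pid):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc="$rpc -s /var/tmp/bdevperf.sock"
  nqn=nqn.2016-06.io.spdk:cnode1

  # Two paths to the same subsystem under one controller name, giving bdev_nvme an
  # alternate path to fail over to whenever the active listener disappears.
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop the active path
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # drop the second path
  sleep 3
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # bring the first port back
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # drop the third path
  wait                                                                  # let the 15 s verify run finish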
00:15:15.949 [2024-11-17 09:05:52.873373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:15.949 [2024-11-17 09:05:52.873397] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce73c0 is same with the state(5) to be set 00:15:16.207 09:05:52 -- host/failover.sh@50 -- # sleep 3 00:15:19.489 09:05:55 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.489 [2024-11-17 09:05:56.139452] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.489 09:05:56 -- host/failover.sh@55 -- # sleep 1 00:15:20.422 09:05:57 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:20.680 [2024-11-17 09:05:57.411551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 [2024-11-17 09:05:57.411708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce59f0 is same with the state(5) to be set 00:15:20.680 09:05:57 -- host/failover.sh@59 -- # wait 70205 00:15:27.246 0 00:15:27.246 09:06:03 -- host/failover.sh@61 -- # killprocess 70181 00:15:27.246 09:06:03 -- common/autotest_common.sh@936 -- # '[' -z 70181 ']' 00:15:27.246 09:06:03 -- common/autotest_common.sh@940 -- # kill -0 70181 00:15:27.246 09:06:03 -- 
common/autotest_common.sh@941 -- # uname 00:15:27.246 09:06:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.246 09:06:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70181 00:15:27.246 killing process with pid 70181 00:15:27.246 09:06:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:27.246 09:06:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:27.246 09:06:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70181' 00:15:27.246 09:06:03 -- common/autotest_common.sh@955 -- # kill 70181 00:15:27.246 09:06:03 -- common/autotest_common.sh@960 -- # wait 70181 00:15:27.246 09:06:03 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:27.246 [2024-11-17 09:05:46.512612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:27.246 [2024-11-17 09:05:46.512737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70181 ] 00:15:27.246 [2024-11-17 09:05:46.650964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.246 [2024-11-17 09:05:46.719935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.246 Running I/O for 15 seconds... 00:15:27.246 [2024-11-17 09:05:49.291123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
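The nvme_qpair output that makes up the rest of try.txt is the initiator logging each command that was still outstanding when its submission queue was deleted: a READ or WRITE print (opcode, cid, lba, length) immediately followed by its ABORTED - SQ DELETION completion. The completions carry dnr:0, i.e. the NVMe Do Not Retry bit is clear, so these I/Os are retriable and the verify workload can continue on the remaining path after each listener removal. A purely illustrative way to summarize such a burst from the saved log, not part of the test suite, would be:

  # Count aborted completions, then split the affected commands by opcode.
  grep -c 'ABORTED - SQ DELETION' try.txt
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' try.txt | awk '{print $2}' | sort | uniq -c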
00:15:27.246 [2024-11-17 09:05:49.291352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.246 [2024-11-17 09:05:49.291449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.246 [2024-11-17 09:05:49.291533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.246 [2024-11-17 09:05:49.291561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.246 [2024-11-17 09:05:49.291589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.246 [2024-11-17 09:05:49.291652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.246 [2024-11-17 09:05:49.291669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 
09:05:49.291699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.291944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.291976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.291992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292389] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.247 [2024-11-17 09:05:49.292896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.247 [2024-11-17 09:05:49.292952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.247 [2024-11-17 09:05:49.292967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.292980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.292995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 
[2024-11-17 09:05:49.293375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293776] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.293923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.293950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.293975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.294020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.294066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.294110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.294142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.294172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.294216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.294246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.294276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.248 [2024-11-17 09:05:49.294305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.294333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.248 [2024-11-17 09:05:49.294362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.248 [2024-11-17 09:05:49.294377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294507] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.294618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.294698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.294729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.294789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.294879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.294909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.294939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.294969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.294985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.295085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.295115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.249 [2024-11-17 09:05:49.295192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.249 [2024-11-17 09:05:49.295500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 
[2024-11-17 09:05:49.295516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb970 is same with the state(5) to be set 00:15:27.249 [2024-11-17 09:05:49.295535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:27.249 [2024-11-17 09:05:49.295546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:27.249 [2024-11-17 09:05:49.295558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125632 len:8 PRP1 0x0 PRP2 0x0 00:15:27.249 [2024-11-17 09:05:49.295571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295637] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12fb970 was disconnected and freed. reset controller. 00:15:27.249 [2024-11-17 09:05:49.295668] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:27.249 [2024-11-17 09:05:49.295751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.249 [2024-11-17 09:05:49.295783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.249 [2024-11-17 09:05:49.295800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.250 [2024-11-17 09:05:49.295815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:49.295833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.250 [2024-11-17 09:05:49.295847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:49.295862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.250 [2024-11-17 09:05:49.295876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:49.295890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:27.250 [2024-11-17 09:05:49.295960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1298690 (9): Bad file descriptor 00:15:27.250 [2024-11-17 09:05:49.298371] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:27.250 [2024-11-17 09:05:49.327714] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
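The "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice and the controller reset above come from bdev_nvme's failover path, which can only kick in if the same controller was registered with more than one transport address. A minimal sketch of how such a setup is usually built with SPDK's rpc.py follows; the bdev name Malloc0, controller name NVMe0, the serial number, and the exact attach flags are illustrative assumptions (the port numbers and subsystem NQN are taken from the log), and newer SPDK releases may additionally require an explicit multipath mode such as -x failover when attaching secondary paths.

# Assumed: a running nvmf target, a Malloc0 bdev, and scripts/rpc.py on the default socket.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# Attaching the same controller name over each address is what gives bdev_nvme
# the alternate trids it fails over between in the messages above.
for port in 4420 4421 4422; do
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s "$port" -n nqn.2016-06.io.spdk:cnode1
done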
00:15:27.250 [2024-11-17 09:05:52.873462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 
09:05:52.873936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.873981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.873997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.250 [2024-11-17 09:05:52.874621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874731] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.250 [2024-11-17 09:05:52.874745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.250 [2024-11-17 09:05:52.874762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.250 [2024-11-17 09:05:52.874776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.874792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.874807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.874823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.874838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.874853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.874868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.874884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.874899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.874915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.874930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.874946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.874960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.874976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.874990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.875104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.875266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.875296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 
[2024-11-17 09:05:52.875764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.875825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.875856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.251 [2024-11-17 09:05:52.875901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.875932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.875948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.251 [2024-11-17 09:05:52.875971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.251 [2024-11-17 09:05:52.876014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876210] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.876928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.876958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.876988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.877002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.877030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.877059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.877087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.877116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.877144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.877174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.877204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.252 [2024-11-17 09:05:52.877236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.877271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.877300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.877329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.877358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.252 [2024-11-17 09:05:52.877373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.252 [2024-11-17 09:05:52.877386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.253 [2024-11-17 09:05:52.877500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.253 [2024-11-17 09:05:52.877545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:27.253 [2024-11-17 09:05:52.877561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.253 [2024-11-17 09:05:52.877621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.253 [2024-11-17 09:05:52.877666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.253 [2024-11-17 09:05:52.877707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.253 [2024-11-17 09:05:52.877852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.253 [2024-11-17 09:05:52.877882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 
09:05:52.877929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.877974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.877990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.878004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.878034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.878079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:52.878124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e0450 is same with the state(5) to be set 00:15:27.253 [2024-11-17 09:05:52.878181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:27.253 [2024-11-17 09:05:52.878193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:27.253 [2024-11-17 09:05:52.878204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127640 len:8 PRP1 0x0 PRP2 0x0 00:15:27.253 [2024-11-17 09:05:52.878218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878264] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e0450 was disconnected and freed. reset controller. 
00:15:27.253 [2024-11-17 09:05:52.878283] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:27.253 [2024-11-17 09:05:52.878341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.253 [2024-11-17 09:05:52.878380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.253 [2024-11-17 09:05:52.878411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.253 [2024-11-17 09:05:52.878443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.253 [2024-11-17 09:05:52.878472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:52.878486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:27.253 [2024-11-17 09:05:52.880870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:27.253 [2024-11-17 09:05:52.880917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1298690 (9): Bad file descriptor 00:15:27.253 [2024-11-17 09:05:52.908985] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
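A second failover (10.0.0.2:4421 to 10.0.0.2:4422) completes here in the same pattern: queued I/O is aborted with "ABORTED - SQ DELETION", the disconnected qpair is freed, and the controller is reset onto the next registered path. On the target side such a failover is typically provoked by dropping the listener the host is currently connected to; a hedged one-liner, reusing the NQN and address seen in the log:

# Assumption: removing the active listener is what forces the host to fail over to the next port.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421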
00:15:27.253 [2024-11-17 09:05:57.411767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.411823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:57.411849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.411864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:57.411880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.411893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:57.411908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.411921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:57.411955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.411970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:57.411984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.411997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:57.412012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.412024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:57.412039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.412052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.253 [2024-11-17 09:05:57.412066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.253 [2024-11-17 09:05:57.412079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 
09:05:57.412121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.412218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.412246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.412743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412775] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.412884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.412915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.412945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.412976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.412992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.413007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.413052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.413131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.413162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.413195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.413241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.413272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.413303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.413334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.254 [2024-11-17 09:05:57.413365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.254 [2024-11-17 09:05:57.413381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.254 [2024-11-17 09:05:57.413397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.413701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.413770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.413802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.413863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 
[2024-11-17 09:05:57.413894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.413925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.413955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.413972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.413988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.414019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.414065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.414144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414227] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.414395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.255 [2024-11-17 09:05:57.414451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.255 [2024-11-17 09:05:57.414556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.255 [2024-11-17 09:05:57.414569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.414663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.414776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.414803] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.414868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.414981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.414995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:27.256 [2024-11-17 09:05:57.415686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 [2024-11-17 09:05:57.415701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.256 
[2024-11-17 09:05:57.415731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.256 [2024-11-17 09:05:57.415745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.415761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.257 [2024-11-17 09:05:57.415775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.415790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.257 [2024-11-17 09:05:57.415804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.415819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.257 [2024-11-17 09:05:57.415833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.415848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.257 [2024-11-17 09:05:57.415862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.415877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.257 [2024-11-17 09:05:57.415890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.415905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130d8e0 is same with the state(5) to be set 00:15:27.257 [2024-11-17 09:05:57.415922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:27.257 [2024-11-17 09:05:57.415933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:27.257 [2024-11-17 09:05:57.415944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105592 len:8 PRP1 0x0 PRP2 0x0 00:15:27.257 [2024-11-17 09:05:57.415957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.416002] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x130d8e0 was disconnected and freed. reset controller. 
00:15:27.257 [2024-11-17 09:05:57.416028] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:27.257 [2024-11-17 09:05:57.416081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.257 [2024-11-17 09:05:57.416103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.416118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.257 [2024-11-17 09:05:57.416132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.416149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.257 [2024-11-17 09:05:57.416163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.416176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.257 [2024-11-17 09:05:57.416189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.257 [2024-11-17 09:05:57.416203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:27.257 [2024-11-17 09:05:57.418729] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:27.257 [2024-11-17 09:05:57.418769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1298690 (9): Bad file descriptor 00:15:27.257 [2024-11-17 09:05:57.447293] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
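
Editor's note: the second half of the test, traced below, rebuilds the same multipath setup by hand over the bdevperf RPC socket before forcing another failover. A minimal sketch of that sequence, assuming a target already exporting nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and a bdevperf instance listening on /var/tmp/bdevperf.sock; commands, flags, and ports are taken from the rpc.py calls traced in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Advertise the two alternate ports on the target side.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Attach the controller through bdevperf on each path; the first call creates NVMe0n1,
    # the later ones register the failover targets.
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # Detaching the active path forces bdev_nvme to fail over to the next registered one.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
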
00:15:27.257 00:15:27.257 Latency(us) 00:15:27.257 [2024-11-17T09:06:04.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.257 [2024-11-17T09:06:04.187Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.257 Verification LBA range: start 0x0 length 0x4000 00:15:27.257 NVMe0n1 : 15.01 13430.46 52.46 290.86 0.00 9310.76 463.59 13822.14 00:15:27.257 [2024-11-17T09:06:04.187Z] =================================================================================================================== 00:15:27.257 [2024-11-17T09:06:04.187Z] Total : 13430.46 52.46 290.86 0.00 9310.76 463.59 13822.14 00:15:27.257 Received shutdown signal, test time was about 15.000000 seconds 00:15:27.257 00:15:27.257 Latency(us) 00:15:27.257 [2024-11-17T09:06:04.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.257 [2024-11-17T09:06:04.187Z] =================================================================================================================== 00:15:27.257 [2024-11-17T09:06:04.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.257 09:06:03 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:27.257 09:06:03 -- host/failover.sh@65 -- # count=3 00:15:27.257 09:06:03 -- host/failover.sh@67 -- # (( count != 3 )) 00:15:27.257 09:06:03 -- host/failover.sh@73 -- # bdevperf_pid=70379 00:15:27.257 09:06:03 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:27.257 09:06:03 -- host/failover.sh@75 -- # waitforlisten 70379 /var/tmp/bdevperf.sock 00:15:27.257 09:06:03 -- common/autotest_common.sh@829 -- # '[' -z 70379 ']' 00:15:27.257 09:06:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.257 09:06:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.257 09:06:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:27.257 09:06:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.257 09:06:03 -- common/autotest_common.sh@10 -- # set +x 00:15:27.545 09:06:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.545 09:06:04 -- common/autotest_common.sh@862 -- # return 0 00:15:27.545 09:06:04 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:27.802 [2024-11-17 09:06:04.668649] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:27.803 09:06:04 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:28.060 [2024-11-17 09:06:04.940832] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:28.060 09:06:04 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.626 NVMe0n1 00:15:28.626 09:06:05 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.626 00:15:28.884 09:06:05 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.141 00:15:29.141 09:06:05 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:29.141 09:06:05 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:29.399 09:06:06 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.656 09:06:06 -- host/failover.sh@87 -- # sleep 3 00:15:32.930 09:06:09 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:32.930 09:06:09 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:32.930 09:06:09 -- host/failover.sh@90 -- # run_test_pid=70462 00:15:32.930 09:06:09 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.930 09:06:09 -- host/failover.sh@92 -- # wait 70462 00:15:34.303 0 00:15:34.303 09:06:10 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:34.303 [2024-11-17 09:06:03.454876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:34.303 [2024-11-17 09:06:03.455007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70379 ] 00:15:34.303 [2024-11-17 09:06:03.595634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.303 [2024-11-17 09:06:03.651559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.303 [2024-11-17 09:06:06.394415] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:34.303 [2024-11-17 09:06:06.394539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.303 [2024-11-17 09:06:06.394564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.303 [2024-11-17 09:06:06.394582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.303 [2024-11-17 09:06:06.394595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.303 [2024-11-17 09:06:06.394621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.303 [2024-11-17 09:06:06.394637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.303 [2024-11-17 09:06:06.394651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.303 [2024-11-17 09:06:06.394664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.303 [2024-11-17 09:06:06.394678] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:34.303 [2024-11-17 09:06:06.394725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:34.303 [2024-11-17 09:06:06.394756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc7690 (9): Bad file descriptor 00:15:34.303 [2024-11-17 09:06:06.406134] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:34.303 Running I/O for 1 seconds... 
00:15:34.303 00:15:34.303 Latency(us) 00:15:34.303 [2024-11-17T09:06:11.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.303 [2024-11-17T09:06:11.233Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:34.303 Verification LBA range: start 0x0 length 0x4000 00:15:34.303 NVMe0n1 : 1.01 13785.09 53.85 0.00 0.00 9237.38 916.01 11021.96 00:15:34.303 [2024-11-17T09:06:11.233Z] =================================================================================================================== 00:15:34.303 [2024-11-17T09:06:11.233Z] Total : 13785.09 53.85 0.00 0.00 9237.38 916.01 11021.96 00:15:34.303 09:06:10 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:34.303 09:06:10 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:34.303 09:06:11 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.561 09:06:11 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:34.561 09:06:11 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:34.819 09:06:11 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:35.076 09:06:11 -- host/failover.sh@101 -- # sleep 3 00:15:38.354 09:06:14 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:38.354 09:06:14 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:38.354 09:06:15 -- host/failover.sh@108 -- # killprocess 70379 00:15:38.355 09:06:15 -- common/autotest_common.sh@936 -- # '[' -z 70379 ']' 00:15:38.355 09:06:15 -- common/autotest_common.sh@940 -- # kill -0 70379 00:15:38.355 09:06:15 -- common/autotest_common.sh@941 -- # uname 00:15:38.355 09:06:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.355 09:06:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70379 00:15:38.355 09:06:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:38.355 killing process with pid 70379 00:15:38.355 09:06:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:38.355 09:06:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70379' 00:15:38.355 09:06:15 -- common/autotest_common.sh@955 -- # kill 70379 00:15:38.355 09:06:15 -- common/autotest_common.sh@960 -- # wait 70379 00:15:38.613 09:06:15 -- host/failover.sh@110 -- # sync 00:15:38.613 09:06:15 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.871 09:06:15 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:38.871 09:06:15 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:38.871 09:06:15 -- host/failover.sh@116 -- # nvmftestfini 00:15:38.871 09:06:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:38.871 09:06:15 -- nvmf/common.sh@116 -- # sync 00:15:38.871 09:06:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:38.871 09:06:15 -- nvmf/common.sh@119 -- # set +e 00:15:38.871 09:06:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:38.871 09:06:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:38.871 rmmod nvme_tcp 
00:15:38.871 rmmod nvme_fabrics 00:15:38.871 rmmod nvme_keyring 00:15:38.871 09:06:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:38.871 09:06:15 -- nvmf/common.sh@123 -- # set -e 00:15:38.871 09:06:15 -- nvmf/common.sh@124 -- # return 0 00:15:38.871 09:06:15 -- nvmf/common.sh@477 -- # '[' -n 70118 ']' 00:15:38.871 09:06:15 -- nvmf/common.sh@478 -- # killprocess 70118 00:15:38.871 09:06:15 -- common/autotest_common.sh@936 -- # '[' -z 70118 ']' 00:15:38.871 09:06:15 -- common/autotest_common.sh@940 -- # kill -0 70118 00:15:38.871 09:06:15 -- common/autotest_common.sh@941 -- # uname 00:15:38.871 09:06:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.871 09:06:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70118 00:15:38.871 09:06:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:38.871 09:06:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:38.871 killing process with pid 70118 00:15:38.871 09:06:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70118' 00:15:38.871 09:06:15 -- common/autotest_common.sh@955 -- # kill 70118 00:15:38.871 09:06:15 -- common/autotest_common.sh@960 -- # wait 70118 00:15:39.129 09:06:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:39.129 09:06:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:39.129 09:06:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:39.129 09:06:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.129 09:06:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:39.129 09:06:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.129 09:06:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.129 09:06:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.129 09:06:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:39.129 00:15:39.129 real 0m33.116s 00:15:39.129 user 2m8.615s 00:15:39.129 sys 0m5.421s 00:15:39.129 09:06:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:39.129 09:06:16 -- common/autotest_common.sh@10 -- # set +x 00:15:39.129 ************************************ 00:15:39.129 END TEST nvmf_failover 00:15:39.129 ************************************ 00:15:39.390 09:06:16 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:39.390 09:06:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.390 09:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.390 09:06:16 -- common/autotest_common.sh@10 -- # set +x 00:15:39.390 ************************************ 00:15:39.390 START TEST nvmf_discovery 00:15:39.390 ************************************ 00:15:39.390 09:06:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:39.390 * Looking for test storage... 
00:15:39.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:39.390 09:06:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:39.390 09:06:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:39.390 09:06:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:39.390 09:06:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:39.390 09:06:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:39.390 09:06:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:39.390 09:06:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:39.390 09:06:16 -- scripts/common.sh@335 -- # IFS=.-: 00:15:39.390 09:06:16 -- scripts/common.sh@335 -- # read -ra ver1 00:15:39.390 09:06:16 -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.390 09:06:16 -- scripts/common.sh@336 -- # read -ra ver2 00:15:39.390 09:06:16 -- scripts/common.sh@337 -- # local 'op=<' 00:15:39.390 09:06:16 -- scripts/common.sh@339 -- # ver1_l=2 00:15:39.390 09:06:16 -- scripts/common.sh@340 -- # ver2_l=1 00:15:39.390 09:06:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:39.390 09:06:16 -- scripts/common.sh@343 -- # case "$op" in 00:15:39.390 09:06:16 -- scripts/common.sh@344 -- # : 1 00:15:39.390 09:06:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:39.390 09:06:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.390 09:06:16 -- scripts/common.sh@364 -- # decimal 1 00:15:39.390 09:06:16 -- scripts/common.sh@352 -- # local d=1 00:15:39.390 09:06:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.390 09:06:16 -- scripts/common.sh@354 -- # echo 1 00:15:39.390 09:06:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:39.390 09:06:16 -- scripts/common.sh@365 -- # decimal 2 00:15:39.390 09:06:16 -- scripts/common.sh@352 -- # local d=2 00:15:39.390 09:06:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.390 09:06:16 -- scripts/common.sh@354 -- # echo 2 00:15:39.390 09:06:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:39.390 09:06:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:39.390 09:06:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:39.390 09:06:16 -- scripts/common.sh@367 -- # return 0 00:15:39.390 09:06:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.390 09:06:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:39.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.390 --rc genhtml_branch_coverage=1 00:15:39.390 --rc genhtml_function_coverage=1 00:15:39.390 --rc genhtml_legend=1 00:15:39.390 --rc geninfo_all_blocks=1 00:15:39.390 --rc geninfo_unexecuted_blocks=1 00:15:39.390 00:15:39.390 ' 00:15:39.390 09:06:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:39.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.390 --rc genhtml_branch_coverage=1 00:15:39.390 --rc genhtml_function_coverage=1 00:15:39.390 --rc genhtml_legend=1 00:15:39.390 --rc geninfo_all_blocks=1 00:15:39.390 --rc geninfo_unexecuted_blocks=1 00:15:39.390 00:15:39.390 ' 00:15:39.390 09:06:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:39.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.390 --rc genhtml_branch_coverage=1 00:15:39.390 --rc genhtml_function_coverage=1 00:15:39.390 --rc genhtml_legend=1 00:15:39.390 --rc geninfo_all_blocks=1 00:15:39.390 --rc geninfo_unexecuted_blocks=1 00:15:39.390 00:15:39.390 ' 00:15:39.390 
09:06:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:39.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.390 --rc genhtml_branch_coverage=1 00:15:39.390 --rc genhtml_function_coverage=1 00:15:39.390 --rc genhtml_legend=1 00:15:39.390 --rc geninfo_all_blocks=1 00:15:39.390 --rc geninfo_unexecuted_blocks=1 00:15:39.390 00:15:39.390 ' 00:15:39.390 09:06:16 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.390 09:06:16 -- nvmf/common.sh@7 -- # uname -s 00:15:39.390 09:06:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.390 09:06:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.390 09:06:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.390 09:06:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.390 09:06:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.390 09:06:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.390 09:06:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.390 09:06:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.390 09:06:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.390 09:06:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.390 09:06:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:15:39.390 09:06:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:15:39.390 09:06:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.390 09:06:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.390 09:06:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.390 09:06:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.390 09:06:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.390 09:06:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.390 09:06:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.390 09:06:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.390 09:06:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.390 09:06:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.390 09:06:16 -- paths/export.sh@5 -- # export PATH 00:15:39.390 09:06:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.390 09:06:16 -- nvmf/common.sh@46 -- # : 0 00:15:39.390 09:06:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.390 09:06:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.390 09:06:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.390 09:06:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.390 09:06:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.390 09:06:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:39.390 09:06:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.390 09:06:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.390 09:06:16 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:39.390 09:06:16 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:39.390 09:06:16 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:39.390 09:06:16 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:39.390 09:06:16 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:39.390 09:06:16 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:39.390 09:06:16 -- host/discovery.sh@25 -- # nvmftestinit 00:15:39.390 09:06:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.390 09:06:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.390 09:06:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.390 09:06:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.390 09:06:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:39.390 09:06:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.390 09:06:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.390 09:06:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.390 09:06:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:39.390 09:06:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:39.390 09:06:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:39.390 09:06:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:39.390 09:06:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:39.390 09:06:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:39.390 09:06:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.390 09:06:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.390 09:06:16 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.391 09:06:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:39.391 09:06:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.391 09:06:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.391 09:06:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.391 09:06:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.391 09:06:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.391 09:06:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.391 09:06:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.391 09:06:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.391 09:06:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:39.391 09:06:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:39.391 Cannot find device "nvmf_tgt_br" 00:15:39.391 09:06:16 -- nvmf/common.sh@154 -- # true 00:15:39.391 09:06:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.391 Cannot find device "nvmf_tgt_br2" 00:15:39.391 09:06:16 -- nvmf/common.sh@155 -- # true 00:15:39.391 09:06:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:39.391 09:06:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:39.391 Cannot find device "nvmf_tgt_br" 00:15:39.391 09:06:16 -- nvmf/common.sh@157 -- # true 00:15:39.391 09:06:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:39.391 Cannot find device "nvmf_tgt_br2" 00:15:39.391 09:06:16 -- nvmf/common.sh@158 -- # true 00:15:39.391 09:06:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:39.650 09:06:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:39.650 09:06:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.650 09:06:16 -- nvmf/common.sh@161 -- # true 00:15:39.650 09:06:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.650 09:06:16 -- nvmf/common.sh@162 -- # true 00:15:39.650 09:06:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.650 09:06:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.650 09:06:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.650 09:06:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.650 09:06:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.650 09:06:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.650 09:06:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.650 09:06:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.650 09:06:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.650 09:06:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:39.650 09:06:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:39.650 09:06:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:39.650 09:06:16 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:39.650 09:06:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.650 09:06:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.650 09:06:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.650 09:06:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:39.650 09:06:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:39.650 09:06:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.650 09:06:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.650 09:06:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.650 09:06:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.650 09:06:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.650 09:06:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:39.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:15:39.650 00:15:39.650 --- 10.0.0.2 ping statistics --- 00:15:39.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.650 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:39.650 09:06:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:39.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:39.650 00:15:39.650 --- 10.0.0.3 ping statistics --- 00:15:39.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.650 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:39.650 09:06:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:39.650 00:15:39.650 --- 10.0.0.1 ping statistics --- 00:15:39.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.650 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:39.650 09:06:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.650 09:06:16 -- nvmf/common.sh@421 -- # return 0 00:15:39.650 09:06:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:39.650 09:06:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.650 09:06:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:39.650 09:06:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:39.650 09:06:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.650 09:06:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:39.650 09:06:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:39.650 09:06:16 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:39.651 09:06:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:39.651 09:06:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.651 09:06:16 -- common/autotest_common.sh@10 -- # set +x 00:15:39.651 09:06:16 -- nvmf/common.sh@469 -- # nvmfpid=70735 00:15:39.651 09:06:16 -- nvmf/common.sh@470 -- # waitforlisten 70735 00:15:39.651 09:06:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.651 09:06:16 -- common/autotest_common.sh@829 -- # '[' -z 70735 ']' 00:15:39.651 09:06:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.651 09:06:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.651 09:06:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.651 09:06:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.651 09:06:16 -- common/autotest_common.sh@10 -- # set +x 00:15:39.909 [2024-11-17 09:06:16.608620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:39.909 [2024-11-17 09:06:16.608755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.909 [2024-11-17 09:06:16.741301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.909 [2024-11-17 09:06:16.790861] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:39.909 [2024-11-17 09:06:16.791041] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.909 [2024-11-17 09:06:16.791068] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.909 [2024-11-17 09:06:16.791076] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.909 [2024-11-17 09:06:16.791105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.847 09:06:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.847 09:06:17 -- common/autotest_common.sh@862 -- # return 0 00:15:40.847 09:06:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:40.847 09:06:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.847 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:40.847 09:06:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.847 09:06:17 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.847 09:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.847 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:40.847 [2024-11-17 09:06:17.662331] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.847 09:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.847 09:06:17 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:40.847 09:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.847 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:40.847 [2024-11-17 09:06:17.670488] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:40.847 09:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.847 09:06:17 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:40.847 09:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.847 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:40.847 null0 00:15:40.847 09:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.847 09:06:17 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:40.847 09:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.847 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:40.847 null1 00:15:40.847 09:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.847 09:06:17 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:40.847 09:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.847 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:40.847 09:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.847 09:06:17 -- host/discovery.sh@45 -- # hostpid=70767 00:15:40.847 09:06:17 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:40.847 09:06:17 -- host/discovery.sh@46 -- # waitforlisten 70767 /tmp/host.sock 00:15:40.847 09:06:17 -- common/autotest_common.sh@829 -- # '[' -z 70767 ']' 00:15:40.848 09:06:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:40.848 09:06:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.848 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:40.848 09:06:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:40.848 09:06:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.848 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:40.848 [2024-11-17 09:06:17.757852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:40.848 [2024-11-17 09:06:17.757938] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70767 ] 00:15:41.106 [2024-11-17 09:06:17.901013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.106 [2024-11-17 09:06:17.969625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:41.106 [2024-11-17 09:06:17.969866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.053 09:06:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.053 09:06:18 -- common/autotest_common.sh@862 -- # return 0 00:15:42.054 09:06:18 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.054 09:06:18 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:42.054 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.054 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.054 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.054 09:06:18 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:42.054 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.054 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.054 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.054 09:06:18 -- host/discovery.sh@72 -- # notify_id=0 00:15:42.054 09:06:18 -- host/discovery.sh@78 -- # get_subsystem_names 00:15:42.054 09:06:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.054 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.054 09:06:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.054 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.054 09:06:18 -- host/discovery.sh@59 -- # xargs 00:15:42.054 09:06:18 -- host/discovery.sh@59 -- # sort 00:15:42.054 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.054 09:06:18 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:15:42.054 09:06:18 -- host/discovery.sh@79 -- # get_bdev_list 00:15:42.054 09:06:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.054 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.054 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.054 09:06:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.054 09:06:18 -- host/discovery.sh@55 -- # sort 00:15:42.054 09:06:18 -- host/discovery.sh@55 -- # xargs 00:15:42.054 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.054 09:06:18 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:15:42.054 09:06:18 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:42.054 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.054 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.054 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.054 09:06:18 -- host/discovery.sh@82 -- # get_subsystem_names 00:15:42.054 09:06:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.054 09:06:18 -- host/discovery.sh@59 -- # sort 00:15:42.054 09:06:18 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:15:42.054 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.054 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.054 09:06:18 -- host/discovery.sh@59 -- # xargs 00:15:42.054 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.054 09:06:18 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:15:42.054 09:06:18 -- host/discovery.sh@83 -- # get_bdev_list 00:15:42.054 09:06:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.054 09:06:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.054 09:06:18 -- host/discovery.sh@55 -- # sort 00:15:42.054 09:06:18 -- host/discovery.sh@55 -- # xargs 00:15:42.054 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.054 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.054 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.054 09:06:18 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:42.054 09:06:18 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:42.054 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.054 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.054 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.318 09:06:18 -- host/discovery.sh@86 -- # get_subsystem_names 00:15:42.318 09:06:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.318 09:06:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.318 09:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.318 09:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 09:06:18 -- host/discovery.sh@59 -- # sort 00:15:42.318 09:06:18 -- host/discovery.sh@59 -- # xargs 00:15:42.318 09:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.318 09:06:19 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:15:42.318 09:06:19 -- host/discovery.sh@87 -- # get_bdev_list 00:15:42.318 09:06:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.318 09:06:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.318 09:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.318 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 09:06:19 -- host/discovery.sh@55 -- # sort 00:15:42.318 09:06:19 -- host/discovery.sh@55 -- # xargs 00:15:42.318 09:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.318 09:06:19 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:42.318 09:06:19 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:42.318 09:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.318 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 [2024-11-17 09:06:19.090952] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.318 09:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.318 09:06:19 -- host/discovery.sh@92 -- # get_subsystem_names 00:15:42.318 09:06:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.318 09:06:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.318 09:06:19 -- host/discovery.sh@59 -- # sort 00:15:42.318 09:06:19 -- host/discovery.sh@59 -- # xargs 00:15:42.318 09:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.318 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 
09:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.318 09:06:19 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:42.318 09:06:19 -- host/discovery.sh@93 -- # get_bdev_list 00:15:42.318 09:06:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.318 09:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.318 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 09:06:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.318 09:06:19 -- host/discovery.sh@55 -- # sort 00:15:42.318 09:06:19 -- host/discovery.sh@55 -- # xargs 00:15:42.318 09:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.318 09:06:19 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:15:42.318 09:06:19 -- host/discovery.sh@94 -- # get_notification_count 00:15:42.318 09:06:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:42.318 09:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.318 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 09:06:19 -- host/discovery.sh@74 -- # jq '. | length' 00:15:42.318 09:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.576 09:06:19 -- host/discovery.sh@74 -- # notification_count=0 00:15:42.576 09:06:19 -- host/discovery.sh@75 -- # notify_id=0 00:15:42.576 09:06:19 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:15:42.576 09:06:19 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:42.576 09:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.576 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.576 09:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.576 09:06:19 -- host/discovery.sh@100 -- # sleep 1 00:15:42.835 [2024-11-17 09:06:19.735350] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:42.835 [2024-11-17 09:06:19.735416] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:42.835 [2024-11-17 09:06:19.735436] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:42.835 [2024-11-17 09:06:19.741390] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:43.094 [2024-11-17 09:06:19.796978] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:43.094 [2024-11-17 09:06:19.797010] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:43.352 09:06:20 -- host/discovery.sh@101 -- # get_subsystem_names 00:15:43.352 09:06:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:43.352 09:06:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:43.352 09:06:20 -- host/discovery.sh@59 -- # sort 00:15:43.352 09:06:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.352 09:06:20 -- host/discovery.sh@59 -- # xargs 00:15:43.352 09:06:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.611 09:06:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@102 -- # get_bdev_list 00:15:43.611 09:06:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:43.611 09:06:20 -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.611 09:06:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.611 09:06:20 -- host/discovery.sh@55 -- # xargs 00:15:43.611 09:06:20 -- host/discovery.sh@55 -- # sort 00:15:43.611 09:06:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.611 09:06:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:15:43.611 09:06:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:43.611 09:06:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:43.611 09:06:20 -- host/discovery.sh@63 -- # sort -n 00:15:43.611 09:06:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.611 09:06:20 -- host/discovery.sh@63 -- # xargs 00:15:43.611 09:06:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.611 09:06:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@104 -- # get_notification_count 00:15:43.611 09:06:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:43.611 09:06:20 -- host/discovery.sh@74 -- # jq '. | length' 00:15:43.611 09:06:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.611 09:06:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.611 09:06:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@74 -- # notification_count=1 00:15:43.611 09:06:20 -- host/discovery.sh@75 -- # notify_id=1 00:15:43.611 09:06:20 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:43.611 09:06:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.611 09:06:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.611 09:06:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.611 09:06:20 -- host/discovery.sh@109 -- # sleep 1 00:15:44.988 09:06:21 -- host/discovery.sh@110 -- # get_bdev_list 00:15:44.988 09:06:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.988 09:06:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.988 09:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.988 09:06:21 -- common/autotest_common.sh@10 -- # set +x 00:15:44.988 09:06:21 -- host/discovery.sh@55 -- # sort 00:15:44.988 09:06:21 -- host/discovery.sh@55 -- # xargs 00:15:44.988 09:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.988 09:06:21 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:44.988 09:06:21 -- host/discovery.sh@111 -- # get_notification_count 00:15:44.988 09:06:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:44.988 09:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.988 09:06:21 -- common/autotest_common.sh@10 -- # set +x 00:15:44.988 09:06:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:44.988 09:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.988 09:06:21 -- host/discovery.sh@74 -- # notification_count=1 00:15:44.988 09:06:21 -- host/discovery.sh@75 -- # notify_id=2 00:15:44.988 09:06:21 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:15:44.988 09:06:21 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:44.988 09:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.988 09:06:21 -- common/autotest_common.sh@10 -- # set +x 00:15:44.988 [2024-11-17 09:06:21.609528] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:44.988 [2024-11-17 09:06:21.610754] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:44.988 [2024-11-17 09:06:21.610808] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:44.988 09:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.988 09:06:21 -- host/discovery.sh@117 -- # sleep 1 00:15:44.988 [2024-11-17 09:06:21.616745] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:44.988 [2024-11-17 09:06:21.678055] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:44.988 [2024-11-17 09:06:21.678108] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:44.988 [2024-11-17 09:06:21.678130] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:45.925 09:06:22 -- host/discovery.sh@118 -- # get_subsystem_names 00:15:45.925 09:06:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:45.925 09:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.925 09:06:22 -- common/autotest_common.sh@10 -- # set +x 00:15:45.925 09:06:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:45.925 09:06:22 -- host/discovery.sh@59 -- # sort 00:15:45.925 09:06:22 -- host/discovery.sh@59 -- # xargs 00:15:45.925 09:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@119 -- # get_bdev_list 00:15:45.925 09:06:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.925 09:06:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:45.925 09:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.925 09:06:22 -- common/autotest_common.sh@10 -- # set +x 00:15:45.925 09:06:22 -- host/discovery.sh@55 -- # sort 00:15:45.925 09:06:22 -- host/discovery.sh@55 -- # xargs 00:15:45.925 09:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:15:45.925 09:06:22 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:45.925 09:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.925 09:06:22 -- common/autotest_common.sh@10 -- # set +x 00:15:45.925 09:06:22 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:45.925 09:06:22 -- host/discovery.sh@63 
-- # xargs 00:15:45.925 09:06:22 -- host/discovery.sh@63 -- # sort -n 00:15:45.925 09:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@121 -- # get_notification_count 00:15:45.925 09:06:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:45.925 09:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.925 09:06:22 -- common/autotest_common.sh@10 -- # set +x 00:15:45.925 09:06:22 -- host/discovery.sh@74 -- # jq '. | length' 00:15:45.925 09:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@74 -- # notification_count=0 00:15:45.925 09:06:22 -- host/discovery.sh@75 -- # notify_id=2 00:15:45.925 09:06:22 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:45.925 09:06:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.925 09:06:22 -- common/autotest_common.sh@10 -- # set +x 00:15:45.925 [2024-11-17 09:06:22.836451] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:45.925 [2024-11-17 09:06:22.836486] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:45.925 09:06:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.925 09:06:22 -- host/discovery.sh@127 -- # sleep 1 00:15:45.925 [2024-11-17 09:06:22.842440] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:45.925 [2024-11-17 09:06:22.842470] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:45.925 [2024-11-17 09:06:22.842564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.925 [2024-11-17 09:06:22.842617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.925 [2024-11-17 09:06:22.842648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.925 [2024-11-17 09:06:22.842657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.925 [2024-11-17 09:06:22.842667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.925 [2024-11-17 09:06:22.842676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.925 [2024-11-17 09:06:22.842686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.925 [2024-11-17 09:06:22.842694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.925 [2024-11-17 09:06:22.842703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x514c10 is same with the state(5) to be set 00:15:47.305 09:06:23 -- host/discovery.sh@128 -- # 
get_subsystem_names 00:15:47.305 09:06:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:47.305 09:06:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:47.305 09:06:23 -- host/discovery.sh@59 -- # sort 00:15:47.305 09:06:23 -- host/discovery.sh@59 -- # xargs 00:15:47.305 09:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.305 09:06:23 -- common/autotest_common.sh@10 -- # set +x 00:15:47.305 09:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.305 09:06:23 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.305 09:06:23 -- host/discovery.sh@129 -- # get_bdev_list 00:15:47.305 09:06:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:47.305 09:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.305 09:06:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:47.305 09:06:23 -- common/autotest_common.sh@10 -- # set +x 00:15:47.305 09:06:23 -- host/discovery.sh@55 -- # sort 00:15:47.305 09:06:23 -- host/discovery.sh@55 -- # xargs 00:15:47.305 09:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.305 09:06:23 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:47.305 09:06:23 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:15:47.305 09:06:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:47.305 09:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.305 09:06:23 -- common/autotest_common.sh@10 -- # set +x 00:15:47.305 09:06:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:47.305 09:06:23 -- host/discovery.sh@63 -- # sort -n 00:15:47.305 09:06:23 -- host/discovery.sh@63 -- # xargs 00:15:47.305 09:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.305 09:06:24 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:15:47.305 09:06:24 -- host/discovery.sh@131 -- # get_notification_count 00:15:47.305 09:06:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:47.305 09:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.305 09:06:24 -- common/autotest_common.sh@10 -- # set +x 00:15:47.305 09:06:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:47.305 09:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.305 09:06:24 -- host/discovery.sh@74 -- # notification_count=0 00:15:47.305 09:06:24 -- host/discovery.sh@75 -- # notify_id=2 00:15:47.305 09:06:24 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:15:47.305 09:06:24 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:47.305 09:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.305 09:06:24 -- common/autotest_common.sh@10 -- # set +x 00:15:47.305 09:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.305 09:06:24 -- host/discovery.sh@135 -- # sleep 1 00:15:48.242 09:06:25 -- host/discovery.sh@136 -- # get_subsystem_names 00:15:48.242 09:06:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:48.242 09:06:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:48.242 09:06:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.242 09:06:25 -- host/discovery.sh@59 -- # sort 00:15:48.242 09:06:25 -- host/discovery.sh@59 -- # xargs 00:15:48.242 09:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:48.242 09:06:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.242 09:06:25 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:15:48.242 09:06:25 -- host/discovery.sh@137 -- # get_bdev_list 00:15:48.242 09:06:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:48.242 09:06:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.242 09:06:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:48.242 09:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:48.242 09:06:25 -- host/discovery.sh@55 -- # sort 00:15:48.242 09:06:25 -- host/discovery.sh@55 -- # xargs 00:15:48.242 09:06:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.501 09:06:25 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:15:48.501 09:06:25 -- host/discovery.sh@138 -- # get_notification_count 00:15:48.501 09:06:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:48.501 09:06:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.501 09:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:48.501 09:06:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:48.501 09:06:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.501 09:06:25 -- host/discovery.sh@74 -- # notification_count=2 00:15:48.501 09:06:25 -- host/discovery.sh@75 -- # notify_id=4 00:15:48.501 09:06:25 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:15:48.501 09:06:25 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:48.501 09:06:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.501 09:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:49.437 [2024-11-17 09:06:26.257798] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:49.437 [2024-11-17 09:06:26.257836] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:49.437 [2024-11-17 09:06:26.257855] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:49.437 [2024-11-17 09:06:26.263821] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:49.437 [2024-11-17 09:06:26.323056] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:49.437 [2024-11-17 09:06:26.323095] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:49.437 09:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.437 09:06:26 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.437 09:06:26 -- common/autotest_common.sh@650 -- # local es=0 00:15:49.438 09:06:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.438 09:06:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:49.438 09:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.438 09:06:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:49.438 09:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.438 09:06:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.438 09:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.438 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:49.438 request: 00:15:49.438 { 00:15:49.438 "name": "nvme", 00:15:49.438 "trtype": "tcp", 00:15:49.438 "traddr": "10.0.0.2", 00:15:49.438 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:49.438 "adrfam": "ipv4", 00:15:49.438 "trsvcid": "8009", 00:15:49.438 "wait_for_attach": true, 00:15:49.438 "method": "bdev_nvme_start_discovery", 00:15:49.438 "req_id": 1 00:15:49.438 } 00:15:49.438 Got JSON-RPC error response 00:15:49.438 response: 00:15:49.438 { 00:15:49.438 "code": -17, 00:15:49.438 "message": "File exists" 00:15:49.438 } 00:15:49.438 09:06:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:49.438 09:06:26 -- common/autotest_common.sh@653 -- # es=1 00:15:49.438 09:06:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.438 09:06:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.438 09:06:26 -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:49.438 09:06:26 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:15:49.438 09:06:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:49.438 09:06:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:49.438 09:06:26 -- host/discovery.sh@67 -- # sort 00:15:49.438 09:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.438 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:49.438 09:06:26 -- host/discovery.sh@67 -- # xargs 00:15:49.438 09:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.697 09:06:26 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:15:49.697 09:06:26 -- host/discovery.sh@147 -- # get_bdev_list 00:15:49.697 09:06:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:49.697 09:06:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:49.697 09:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.697 09:06:26 -- host/discovery.sh@55 -- # sort 00:15:49.697 09:06:26 -- host/discovery.sh@55 -- # xargs 00:15:49.697 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:49.697 09:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.697 09:06:26 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:49.697 09:06:26 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.697 09:06:26 -- common/autotest_common.sh@650 -- # local es=0 00:15:49.697 09:06:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.697 09:06:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:49.697 09:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.697 09:06:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:49.697 09:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.697 09:06:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.697 09:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.697 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:49.697 request: 00:15:49.697 { 00:15:49.697 "name": "nvme_second", 00:15:49.697 "trtype": "tcp", 00:15:49.697 "traddr": "10.0.0.2", 00:15:49.697 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:49.697 "adrfam": "ipv4", 00:15:49.697 "trsvcid": "8009", 00:15:49.697 "wait_for_attach": true, 00:15:49.697 "method": "bdev_nvme_start_discovery", 00:15:49.697 "req_id": 1 00:15:49.697 } 00:15:49.697 Got JSON-RPC error response 00:15:49.697 response: 00:15:49.697 { 00:15:49.697 "code": -17, 00:15:49.697 "message": "File exists" 00:15:49.697 } 00:15:49.697 09:06:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:49.697 09:06:26 -- common/autotest_common.sh@653 -- # es=1 00:15:49.697 09:06:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.697 09:06:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.697 09:06:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:49.697 09:06:26 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:15:49.697 09:06:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:49.697 
09:06:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:49.697 09:06:26 -- host/discovery.sh@67 -- # sort 00:15:49.697 09:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.697 09:06:26 -- host/discovery.sh@67 -- # xargs 00:15:49.697 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:49.697 09:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.697 09:06:26 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:15:49.697 09:06:26 -- host/discovery.sh@153 -- # get_bdev_list 00:15:49.697 09:06:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:49.697 09:06:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:49.697 09:06:26 -- host/discovery.sh@55 -- # sort 00:15:49.697 09:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.697 09:06:26 -- host/discovery.sh@55 -- # xargs 00:15:49.697 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:49.697 09:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.697 09:06:26 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:49.697 09:06:26 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:49.697 09:06:26 -- common/autotest_common.sh@650 -- # local es=0 00:15:49.697 09:06:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:49.697 09:06:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:49.697 09:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.697 09:06:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:49.697 09:06:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.697 09:06:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:49.697 09:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.697 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:51.074 [2024-11-17 09:06:27.597197] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:51.074 [2024-11-17 09:06:27.597339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:51.074 [2024-11-17 09:06:27.597383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:51.074 [2024-11-17 09:06:27.597399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x566270 with addr=10.0.0.2, port=8010 00:15:51.074 [2024-11-17 09:06:27.597418] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:51.074 [2024-11-17 09:06:27.597427] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:51.074 [2024-11-17 09:06:27.597435] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:52.011 [2024-11-17 09:06:28.597191] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:52.011 [2024-11-17 09:06:28.597308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:52.011 [2024-11-17 09:06:28.597349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:52.011 [2024-11-17 
09:06:28.597364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x566270 with addr=10.0.0.2, port=8010 00:15:52.011 [2024-11-17 09:06:28.597384] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:52.011 [2024-11-17 09:06:28.597393] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:52.011 [2024-11-17 09:06:28.597402] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:52.949 [2024-11-17 09:06:29.597046] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:52.949 request: 00:15:52.949 { 00:15:52.949 "name": "nvme_second", 00:15:52.949 "trtype": "tcp", 00:15:52.949 "traddr": "10.0.0.2", 00:15:52.949 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:52.949 "adrfam": "ipv4", 00:15:52.949 "trsvcid": "8010", 00:15:52.949 "attach_timeout_ms": 3000, 00:15:52.949 "method": "bdev_nvme_start_discovery", 00:15:52.949 "req_id": 1 00:15:52.949 } 00:15:52.949 Got JSON-RPC error response 00:15:52.949 response: 00:15:52.949 { 00:15:52.949 "code": -110, 00:15:52.949 "message": "Connection timed out" 00:15:52.949 } 00:15:52.949 09:06:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:52.949 09:06:29 -- common/autotest_common.sh@653 -- # es=1 00:15:52.949 09:06:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:52.949 09:06:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:52.949 09:06:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:52.949 09:06:29 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:15:52.949 09:06:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:52.949 09:06:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:52.949 09:06:29 -- host/discovery.sh@67 -- # sort 00:15:52.949 09:06:29 -- host/discovery.sh@67 -- # xargs 00:15:52.949 09:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.949 09:06:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.949 09:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.949 09:06:29 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:15:52.949 09:06:29 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:15:52.949 09:06:29 -- host/discovery.sh@162 -- # kill 70767 00:15:52.949 09:06:29 -- host/discovery.sh@163 -- # nvmftestfini 00:15:52.949 09:06:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:52.949 09:06:29 -- nvmf/common.sh@116 -- # sync 00:15:52.949 09:06:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:52.949 09:06:29 -- nvmf/common.sh@119 -- # set +e 00:15:52.949 09:06:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:52.949 09:06:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:52.949 rmmod nvme_tcp 00:15:52.949 rmmod nvme_fabrics 00:15:52.949 rmmod nvme_keyring 00:15:52.949 09:06:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:52.949 09:06:29 -- nvmf/common.sh@123 -- # set -e 00:15:52.949 09:06:29 -- nvmf/common.sh@124 -- # return 0 00:15:52.949 09:06:29 -- nvmf/common.sh@477 -- # '[' -n 70735 ']' 00:15:52.949 09:06:29 -- nvmf/common.sh@478 -- # killprocess 70735 00:15:52.949 09:06:29 -- common/autotest_common.sh@936 -- # '[' -z 70735 ']' 00:15:52.949 09:06:29 -- common/autotest_common.sh@940 -- # kill -0 70735 00:15:52.949 09:06:29 -- common/autotest_common.sh@941 -- # uname 00:15:52.949 09:06:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.949 09:06:29 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70735 00:15:52.949 killing process with pid 70735 00:15:52.949 09:06:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:52.949 09:06:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:52.949 09:06:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70735' 00:15:52.949 09:06:29 -- common/autotest_common.sh@955 -- # kill 70735 00:15:52.949 09:06:29 -- common/autotest_common.sh@960 -- # wait 70735 00:15:53.208 09:06:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:53.208 09:06:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:53.208 09:06:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:53.208 09:06:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.208 09:06:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:53.208 09:06:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.208 09:06:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.208 09:06:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.208 09:06:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:53.208 00:15:53.208 real 0m13.951s 00:15:53.208 user 0m26.868s 00:15:53.208 sys 0m2.181s 00:15:53.208 09:06:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:53.208 ************************************ 00:15:53.208 END TEST nvmf_discovery 00:15:53.208 ************************************ 00:15:53.208 09:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:53.208 09:06:30 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:53.208 09:06:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:53.208 09:06:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.208 09:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:53.208 ************************************ 00:15:53.208 START TEST nvmf_discovery_remove_ifc 00:15:53.208 ************************************ 00:15:53.208 09:06:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:53.469 * Looking for test storage... 
00:15:53.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:53.469 09:06:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:53.469 09:06:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:53.469 09:06:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:53.469 09:06:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:53.469 09:06:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:53.469 09:06:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:53.469 09:06:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:53.469 09:06:30 -- scripts/common.sh@335 -- # IFS=.-: 00:15:53.469 09:06:30 -- scripts/common.sh@335 -- # read -ra ver1 00:15:53.469 09:06:30 -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.469 09:06:30 -- scripts/common.sh@336 -- # read -ra ver2 00:15:53.469 09:06:30 -- scripts/common.sh@337 -- # local 'op=<' 00:15:53.469 09:06:30 -- scripts/common.sh@339 -- # ver1_l=2 00:15:53.469 09:06:30 -- scripts/common.sh@340 -- # ver2_l=1 00:15:53.469 09:06:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:53.469 09:06:30 -- scripts/common.sh@343 -- # case "$op" in 00:15:53.469 09:06:30 -- scripts/common.sh@344 -- # : 1 00:15:53.469 09:06:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:53.469 09:06:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:53.469 09:06:30 -- scripts/common.sh@364 -- # decimal 1 00:15:53.469 09:06:30 -- scripts/common.sh@352 -- # local d=1 00:15:53.469 09:06:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.469 09:06:30 -- scripts/common.sh@354 -- # echo 1 00:15:53.469 09:06:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:53.469 09:06:30 -- scripts/common.sh@365 -- # decimal 2 00:15:53.469 09:06:30 -- scripts/common.sh@352 -- # local d=2 00:15:53.469 09:06:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.469 09:06:30 -- scripts/common.sh@354 -- # echo 2 00:15:53.469 09:06:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:53.469 09:06:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:53.469 09:06:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:53.469 09:06:30 -- scripts/common.sh@367 -- # return 0 00:15:53.469 09:06:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.469 09:06:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:53.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.469 --rc genhtml_branch_coverage=1 00:15:53.469 --rc genhtml_function_coverage=1 00:15:53.469 --rc genhtml_legend=1 00:15:53.469 --rc geninfo_all_blocks=1 00:15:53.469 --rc geninfo_unexecuted_blocks=1 00:15:53.469 00:15:53.469 ' 00:15:53.469 09:06:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:53.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.469 --rc genhtml_branch_coverage=1 00:15:53.469 --rc genhtml_function_coverage=1 00:15:53.469 --rc genhtml_legend=1 00:15:53.469 --rc geninfo_all_blocks=1 00:15:53.469 --rc geninfo_unexecuted_blocks=1 00:15:53.469 00:15:53.469 ' 00:15:53.469 09:06:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:53.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.469 --rc genhtml_branch_coverage=1 00:15:53.469 --rc genhtml_function_coverage=1 00:15:53.469 --rc genhtml_legend=1 00:15:53.469 --rc geninfo_all_blocks=1 00:15:53.469 --rc geninfo_unexecuted_blocks=1 00:15:53.469 00:15:53.469 ' 00:15:53.469 
09:06:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:53.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.469 --rc genhtml_branch_coverage=1 00:15:53.469 --rc genhtml_function_coverage=1 00:15:53.469 --rc genhtml_legend=1 00:15:53.469 --rc geninfo_all_blocks=1 00:15:53.469 --rc geninfo_unexecuted_blocks=1 00:15:53.469 00:15:53.469 ' 00:15:53.469 09:06:30 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.469 09:06:30 -- nvmf/common.sh@7 -- # uname -s 00:15:53.469 09:06:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.469 09:06:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.469 09:06:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.469 09:06:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.469 09:06:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.469 09:06:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.469 09:06:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.469 09:06:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.469 09:06:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.469 09:06:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.469 09:06:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:15:53.469 09:06:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:15:53.469 09:06:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.469 09:06:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.469 09:06:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.469 09:06:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.469 09:06:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.469 09:06:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.469 09:06:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.469 09:06:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.469 09:06:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.470 09:06:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.470 09:06:30 -- paths/export.sh@5 -- # export PATH 00:15:53.470 09:06:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.470 09:06:30 -- nvmf/common.sh@46 -- # : 0 00:15:53.470 09:06:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:53.470 09:06:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:53.470 09:06:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:53.470 09:06:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.470 09:06:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.470 09:06:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:53.470 09:06:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:53.470 09:06:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:53.470 09:06:30 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:53.470 09:06:30 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:53.470 09:06:30 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:53.470 09:06:30 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:53.470 09:06:30 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:53.470 09:06:30 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:53.470 09:06:30 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:53.470 09:06:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:53.470 09:06:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.470 09:06:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:53.470 09:06:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:53.470 09:06:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:53.470 09:06:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.470 09:06:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.470 09:06:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.470 09:06:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:53.470 09:06:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:53.470 09:06:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:53.470 09:06:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:53.470 09:06:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:53.470 09:06:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:53.470 09:06:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.470 09:06:30 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.470 09:06:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.470 09:06:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:53.470 09:06:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.470 09:06:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.470 09:06:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.470 09:06:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.470 09:06:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.470 09:06:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.470 09:06:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.470 09:06:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.470 09:06:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:53.470 09:06:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:53.470 Cannot find device "nvmf_tgt_br" 00:15:53.470 09:06:30 -- nvmf/common.sh@154 -- # true 00:15:53.470 09:06:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.470 Cannot find device "nvmf_tgt_br2" 00:15:53.470 09:06:30 -- nvmf/common.sh@155 -- # true 00:15:53.470 09:06:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:53.470 09:06:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:53.470 Cannot find device "nvmf_tgt_br" 00:15:53.470 09:06:30 -- nvmf/common.sh@157 -- # true 00:15:53.470 09:06:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:53.470 Cannot find device "nvmf_tgt_br2" 00:15:53.470 09:06:30 -- nvmf/common.sh@158 -- # true 00:15:53.470 09:06:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:53.730 09:06:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:53.730 09:06:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.730 09:06:30 -- nvmf/common.sh@161 -- # true 00:15:53.730 09:06:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.730 09:06:30 -- nvmf/common.sh@162 -- # true 00:15:53.730 09:06:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.730 09:06:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.730 09:06:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.730 09:06:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.730 09:06:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.730 09:06:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.730 09:06:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.730 09:06:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.730 09:06:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:53.730 09:06:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:53.730 09:06:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:53.730 09:06:30 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:53.730 09:06:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:53.730 09:06:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.730 09:06:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.730 09:06:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.730 09:06:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:53.730 09:06:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:53.730 09:06:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.730 09:06:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.730 09:06:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.730 09:06:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.730 09:06:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.730 09:06:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:53.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:15:53.730 00:15:53.730 --- 10.0.0.2 ping statistics --- 00:15:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.730 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:53.730 09:06:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:53.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:15:53.730 00:15:53.730 --- 10.0.0.3 ping statistics --- 00:15:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.730 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:53.730 09:06:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:53.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:53.730 00:15:53.730 --- 10.0.0.1 ping statistics --- 00:15:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.730 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:53.730 09:06:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.730 09:06:30 -- nvmf/common.sh@421 -- # return 0 00:15:53.730 09:06:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:53.730 09:06:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.730 09:06:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:53.730 09:06:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:53.730 09:06:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.730 09:06:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:53.730 09:06:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:53.730 09:06:30 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:53.730 09:06:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:53.730 09:06:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:53.730 09:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 09:06:30 -- nvmf/common.sh@469 -- # nvmfpid=71262 00:15:53.730 09:06:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:53.730 09:06:30 -- nvmf/common.sh@470 -- # waitforlisten 71262 00:15:53.730 09:06:30 -- common/autotest_common.sh@829 -- # '[' -z 71262 ']' 00:15:53.730 09:06:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.730 09:06:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.730 09:06:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.730 09:06:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.730 09:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:53.992 [2024-11-17 09:06:30.709740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:53.992 [2024-11-17 09:06:30.709850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.992 [2024-11-17 09:06:30.851730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.297 [2024-11-17 09:06:30.921759] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:54.297 [2024-11-17 09:06:30.921928] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.297 [2024-11-17 09:06:30.921944] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.297 [2024-11-17 09:06:30.921955] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:54.298 [2024-11-17 09:06:30.921993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.890 09:06:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.890 09:06:31 -- common/autotest_common.sh@862 -- # return 0 00:15:54.890 09:06:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:54.890 09:06:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.890 09:06:31 -- common/autotest_common.sh@10 -- # set +x 00:15:54.890 09:06:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.890 09:06:31 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:54.890 09:06:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.890 09:06:31 -- common/autotest_common.sh@10 -- # set +x 00:15:54.890 [2024-11-17 09:06:31.757269] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.890 [2024-11-17 09:06:31.765406] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:54.890 null0 00:15:54.890 [2024-11-17 09:06:31.797329] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.149 09:06:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.149 09:06:31 -- host/discovery_remove_ifc.sh@59 -- # hostpid=71300 00:15:55.149 09:06:31 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:55.149 09:06:31 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 71300 /tmp/host.sock 00:15:55.149 09:06:31 -- common/autotest_common.sh@829 -- # '[' -z 71300 ']' 00:15:55.149 09:06:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:55.149 09:06:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.149 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:55.149 09:06:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:55.149 09:06:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.149 09:06:31 -- common/autotest_common.sh@10 -- # set +x 00:15:55.149 [2024-11-17 09:06:31.873734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:55.149 [2024-11-17 09:06:31.873853] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71300 ] 00:15:55.149 [2024-11-17 09:06:32.014387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.408 [2024-11-17 09:06:32.083100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:55.408 [2024-11-17 09:06:32.083300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.977 09:06:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.977 09:06:32 -- common/autotest_common.sh@862 -- # return 0 00:15:55.977 09:06:32 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.977 09:06:32 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:55.977 09:06:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.977 09:06:32 -- common/autotest_common.sh@10 -- # set +x 00:15:55.977 09:06:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.977 09:06:32 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:55.977 09:06:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.977 09:06:32 -- common/autotest_common.sh@10 -- # set +x 00:15:55.977 09:06:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.977 09:06:32 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:55.977 09:06:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.977 09:06:32 -- common/autotest_common.sh@10 -- # set +x 00:15:57.355 [2024-11-17 09:06:33.880235] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:57.355 [2024-11-17 09:06:33.880284] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:57.355 [2024-11-17 09:06:33.880303] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:57.355 [2024-11-17 09:06:33.886274] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:57.355 [2024-11-17 09:06:33.941757] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:57.355 [2024-11-17 09:06:33.941822] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:57.355 [2024-11-17 09:06:33.941848] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:57.355 [2024-11-17 09:06:33.941864] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:57.355 [2024-11-17 09:06:33.941887] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:57.355 09:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.355 09:06:33 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:57.355 09:06:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:57.355 09:06:33 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.355 09:06:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:57.355 09:06:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.355 09:06:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:57.355 09:06:33 -- common/autotest_common.sh@10 -- # set +x 00:15:57.355 09:06:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:57.355 [2024-11-17 09:06:33.948915] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc16be0 was disconnected and freed. delete nvme_qpair. 00:15:57.355 09:06:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.355 09:06:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:57.355 09:06:34 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:57.355 09:06:34 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:57.355 09:06:34 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:57.355 09:06:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:57.356 09:06:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:57.356 09:06:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:57.356 09:06:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:57.356 09:06:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.356 09:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.356 09:06:34 -- common/autotest_common.sh@10 -- # set +x 00:15:57.356 09:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.356 09:06:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:57.356 09:06:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:58.292 09:06:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.292 09:06:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.292 09:06:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.292 09:06:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.292 09:06:35 -- common/autotest_common.sh@10 -- # set +x 00:15:58.292 09:06:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.292 09:06:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.292 09:06:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.292 09:06:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:58.292 09:06:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:59.228 09:06:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:59.228 09:06:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.228 09:06:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:59.228 09:06:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.228 09:06:36 -- common/autotest_common.sh@10 -- # set +x 00:15:59.228 09:06:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:59.228 09:06:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:59.228 09:06:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.487 09:06:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:59.487 09:06:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:00.423 09:06:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:00.423 09:06:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:16:00.423 09:06:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:00.423 09:06:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:00.423 09:06:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.423 09:06:37 -- common/autotest_common.sh@10 -- # set +x 00:16:00.423 09:06:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:00.423 09:06:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.423 09:06:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:00.423 09:06:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:01.359 09:06:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:01.359 09:06:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:01.359 09:06:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.359 09:06:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.359 09:06:38 -- common/autotest_common.sh@10 -- # set +x 00:16:01.359 09:06:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:01.359 09:06:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:01.359 09:06:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.618 09:06:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:01.618 09:06:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:02.555 09:06:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:02.555 09:06:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.555 09:06:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:02.555 09:06:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:02.555 09:06:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.555 09:06:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:02.555 09:06:39 -- common/autotest_common.sh@10 -- # set +x 00:16:02.555 09:06:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.555 09:06:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:02.555 09:06:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:02.555 [2024-11-17 09:06:39.370156] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:02.555 [2024-11-17 09:06:39.370228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.555 [2024-11-17 09:06:39.370242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.555 [2024-11-17 09:06:39.370253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.555 [2024-11-17 09:06:39.370261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.555 [2024-11-17 09:06:39.370270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.555 [2024-11-17 09:06:39.370278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.555 [2024-11-17 09:06:39.370286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.555 [2024-11-17 09:06:39.370293] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.555 [2024-11-17 09:06:39.370302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.555 [2024-11-17 09:06:39.370325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.555 [2024-11-17 09:06:39.370350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bde0 is same with the state(5) to be set 00:16:02.555 [2024-11-17 09:06:39.380136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bde0 (9): Bad file descriptor 00:16:02.555 [2024-11-17 09:06:39.390154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:03.491 09:06:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:03.491 09:06:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.491 09:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.491 09:06:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:03.491 09:06:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:03.491 09:06:40 -- common/autotest_common.sh@10 -- # set +x 00:16:03.491 09:06:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:03.491 [2024-11-17 09:06:40.414744] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:04.868 [2024-11-17 09:06:41.434775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:05.805 [2024-11-17 09:06:42.458730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:05.805 [2024-11-17 09:06:42.458887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8bde0 with addr=10.0.0.2, port=4420 00:16:05.805 [2024-11-17 09:06:42.458922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bde0 is same with the state(5) to be set 00:16:05.805 [2024-11-17 09:06:42.458977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:05.805 [2024-11-17 09:06:42.459000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:05.805 [2024-11-17 09:06:42.459017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:05.805 [2024-11-17 09:06:42.459036] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:05.805 [2024-11-17 09:06:42.459890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bde0 (9): Bad file descriptor 00:16:05.805 [2024-11-17 09:06:42.459992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:05.805 [2024-11-17 09:06:42.460043] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:05.805 [2024-11-17 09:06:42.460145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.805 [2024-11-17 09:06:42.460184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.805 [2024-11-17 09:06:42.460209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.805 [2024-11-17 09:06:42.460233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.805 [2024-11-17 09:06:42.460256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.805 [2024-11-17 09:06:42.460276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.805 [2024-11-17 09:06:42.460296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.805 [2024-11-17 09:06:42.460316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.805 [2024-11-17 09:06:42.460338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.805 [2024-11-17 09:06:42.460358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.805 [2024-11-17 09:06:42.460377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:05.805 [2024-11-17 09:06:42.460407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8c1f0 (9): Bad file descriptor 00:16:05.805 [2024-11-17 09:06:42.461012] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:05.805 [2024-11-17 09:06:42.461059] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:05.805 09:06:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.805 09:06:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:05.805 09:06:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.739 09:06:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.739 09:06:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:06.739 09:06:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:06.739 09:06:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:06.740 09:06:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.740 09:06:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:06.740 09:06:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:06.740 09:06:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.740 09:06:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.740 09:06:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.740 09:06:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:06.740 09:06:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:07.701 [2024-11-17 09:06:44.465330] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:07.701 [2024-11-17 09:06:44.465373] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:07.701 [2024-11-17 09:06:44.465407] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:07.701 [2024-11-17 09:06:44.471361] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:07.701 [2024-11-17 09:06:44.526390] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:07.701 [2024-11-17 09:06:44.526450] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:07.701 [2024-11-17 09:06:44.526472] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:07.701 [2024-11-17 09:06:44.526502] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:07.701 [2024-11-17 09:06:44.526510] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:07.701 [2024-11-17 09:06:44.534122] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbcdce0 was disconnected and freed. delete nvme_qpair. 00:16:07.701 09:06:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:07.701 09:06:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.701 09:06:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:07.701 09:06:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.701 09:06:44 -- common/autotest_common.sh@10 -- # set +x 00:16:07.701 09:06:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:07.701 09:06:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:07.960 09:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.960 09:06:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:07.960 09:06:44 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:07.960 09:06:44 -- host/discovery_remove_ifc.sh@90 -- # killprocess 71300 00:16:07.960 09:06:44 -- common/autotest_common.sh@936 -- # '[' -z 71300 ']' 00:16:07.960 09:06:44 -- common/autotest_common.sh@940 -- # kill -0 71300 00:16:07.960 09:06:44 -- common/autotest_common.sh@941 -- # uname 00:16:07.960 09:06:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:07.960 09:06:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71300 00:16:07.960 09:06:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:07.960 09:06:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:07.960 09:06:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71300' 00:16:07.960 killing process with pid 71300 00:16:07.960 09:06:44 -- common/autotest_common.sh@955 -- # kill 71300 00:16:07.960 09:06:44 -- common/autotest_common.sh@960 -- # wait 71300 00:16:07.960 09:06:44 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:07.960 09:06:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:07.960 09:06:44 -- nvmf/common.sh@116 -- # sync 00:16:08.219 09:06:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:08.219 09:06:44 -- nvmf/common.sh@119 -- # set +e 00:16:08.219 09:06:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:08.219 09:06:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:08.219 rmmod nvme_tcp 00:16:08.219 rmmod nvme_fabrics 00:16:08.219 rmmod nvme_keyring 00:16:08.219 09:06:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:08.219 09:06:44 -- nvmf/common.sh@123 -- # set -e 00:16:08.219 09:06:44 -- nvmf/common.sh@124 -- # return 0 00:16:08.219 09:06:44 -- nvmf/common.sh@477 -- # '[' -n 71262 ']' 00:16:08.219 09:06:44 -- nvmf/common.sh@478 -- # killprocess 71262 00:16:08.219 09:06:44 -- common/autotest_common.sh@936 -- # '[' -z 71262 ']' 00:16:08.219 09:06:44 -- common/autotest_common.sh@940 -- # kill -0 71262 00:16:08.219 09:06:44 -- common/autotest_common.sh@941 -- # uname 00:16:08.219 09:06:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:08.219 09:06:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71262 00:16:08.219 09:06:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:08.219 09:06:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:08.219 killing process with pid 71262 
00:16:08.219 09:06:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71262' 00:16:08.219 09:06:45 -- common/autotest_common.sh@955 -- # kill 71262 00:16:08.219 09:06:45 -- common/autotest_common.sh@960 -- # wait 71262 00:16:08.479 09:06:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:08.479 09:06:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:08.479 09:06:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:08.479 09:06:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.479 09:06:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:08.479 09:06:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.479 09:06:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.479 09:06:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.479 09:06:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:08.479 00:16:08.479 real 0m15.175s 00:16:08.479 user 0m24.387s 00:16:08.479 sys 0m2.365s 00:16:08.479 09:06:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:08.479 09:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:08.479 ************************************ 00:16:08.479 END TEST nvmf_discovery_remove_ifc 00:16:08.479 ************************************ 00:16:08.479 09:06:45 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:16:08.479 09:06:45 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:08.479 09:06:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:08.479 09:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:08.479 09:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:08.479 ************************************ 00:16:08.479 START TEST nvmf_digest 00:16:08.479 ************************************ 00:16:08.479 09:06:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:08.479 * Looking for test storage... 00:16:08.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:08.479 09:06:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:08.479 09:06:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:08.479 09:06:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:08.738 09:06:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:08.738 09:06:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:08.738 09:06:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:08.738 09:06:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:08.738 09:06:45 -- scripts/common.sh@335 -- # IFS=.-: 00:16:08.738 09:06:45 -- scripts/common.sh@335 -- # read -ra ver1 00:16:08.738 09:06:45 -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.738 09:06:45 -- scripts/common.sh@336 -- # read -ra ver2 00:16:08.738 09:06:45 -- scripts/common.sh@337 -- # local 'op=<' 00:16:08.738 09:06:45 -- scripts/common.sh@339 -- # ver1_l=2 00:16:08.738 09:06:45 -- scripts/common.sh@340 -- # ver2_l=1 00:16:08.738 09:06:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:08.738 09:06:45 -- scripts/common.sh@343 -- # case "$op" in 00:16:08.738 09:06:45 -- scripts/common.sh@344 -- # : 1 00:16:08.738 09:06:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:08.738 09:06:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.738 09:06:45 -- scripts/common.sh@364 -- # decimal 1 00:16:08.738 09:06:45 -- scripts/common.sh@352 -- # local d=1 00:16:08.738 09:06:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.738 09:06:45 -- scripts/common.sh@354 -- # echo 1 00:16:08.738 09:06:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:08.738 09:06:45 -- scripts/common.sh@365 -- # decimal 2 00:16:08.738 09:06:45 -- scripts/common.sh@352 -- # local d=2 00:16:08.738 09:06:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.738 09:06:45 -- scripts/common.sh@354 -- # echo 2 00:16:08.738 09:06:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:08.738 09:06:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:08.738 09:06:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:08.738 09:06:45 -- scripts/common.sh@367 -- # return 0 00:16:08.739 09:06:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.739 09:06:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:08.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.739 --rc genhtml_branch_coverage=1 00:16:08.739 --rc genhtml_function_coverage=1 00:16:08.739 --rc genhtml_legend=1 00:16:08.739 --rc geninfo_all_blocks=1 00:16:08.739 --rc geninfo_unexecuted_blocks=1 00:16:08.739 00:16:08.739 ' 00:16:08.739 09:06:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:08.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.739 --rc genhtml_branch_coverage=1 00:16:08.739 --rc genhtml_function_coverage=1 00:16:08.739 --rc genhtml_legend=1 00:16:08.739 --rc geninfo_all_blocks=1 00:16:08.739 --rc geninfo_unexecuted_blocks=1 00:16:08.739 00:16:08.739 ' 00:16:08.739 09:06:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:08.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.739 --rc genhtml_branch_coverage=1 00:16:08.739 --rc genhtml_function_coverage=1 00:16:08.739 --rc genhtml_legend=1 00:16:08.739 --rc geninfo_all_blocks=1 00:16:08.739 --rc geninfo_unexecuted_blocks=1 00:16:08.739 00:16:08.739 ' 00:16:08.739 09:06:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:08.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.739 --rc genhtml_branch_coverage=1 00:16:08.739 --rc genhtml_function_coverage=1 00:16:08.739 --rc genhtml_legend=1 00:16:08.739 --rc geninfo_all_blocks=1 00:16:08.739 --rc geninfo_unexecuted_blocks=1 00:16:08.739 00:16:08.739 ' 00:16:08.739 09:06:45 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:08.739 09:06:45 -- nvmf/common.sh@7 -- # uname -s 00:16:08.739 09:06:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.739 09:06:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.739 09:06:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.739 09:06:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.739 09:06:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.739 09:06:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.739 09:06:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.739 09:06:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.739 09:06:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.739 09:06:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.739 09:06:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:16:08.739 
09:06:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:16:08.739 09:06:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.739 09:06:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.739 09:06:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:08.739 09:06:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:08.739 09:06:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.739 09:06:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.739 09:06:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.739 09:06:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.739 09:06:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.739 09:06:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.739 09:06:45 -- paths/export.sh@5 -- # export PATH 00:16:08.739 09:06:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.739 09:06:45 -- nvmf/common.sh@46 -- # : 0 00:16:08.739 09:06:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:08.739 09:06:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:08.739 09:06:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:08.739 09:06:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.739 09:06:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.739 09:06:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:08.739 09:06:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:08.739 09:06:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:08.739 09:06:45 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:08.739 09:06:45 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:08.739 09:06:45 -- host/digest.sh@16 -- # runtime=2 00:16:08.739 09:06:45 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:16:08.739 09:06:45 -- host/digest.sh@132 -- # nvmftestinit 00:16:08.739 09:06:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:08.739 09:06:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.739 09:06:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:08.739 09:06:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:08.739 09:06:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:08.739 09:06:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.739 09:06:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.739 09:06:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.739 09:06:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:08.739 09:06:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:08.739 09:06:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:08.739 09:06:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:08.739 09:06:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:08.739 09:06:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:08.739 09:06:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.739 09:06:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.739 09:06:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:08.739 09:06:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:08.739 09:06:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.739 09:06:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.739 09:06:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.739 09:06:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.739 09:06:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.739 09:06:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.739 09:06:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.739 09:06:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.739 09:06:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:08.739 09:06:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:08.739 Cannot find device "nvmf_tgt_br" 00:16:08.739 09:06:45 -- nvmf/common.sh@154 -- # true 00:16:08.739 09:06:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.739 Cannot find device "nvmf_tgt_br2" 00:16:08.739 09:06:45 -- nvmf/common.sh@155 -- # true 00:16:08.739 09:06:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:08.739 09:06:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:08.739 Cannot find device "nvmf_tgt_br" 00:16:08.739 09:06:45 -- nvmf/common.sh@157 -- # true 00:16:08.739 09:06:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:08.739 Cannot find device "nvmf_tgt_br2" 00:16:08.739 09:06:45 -- nvmf/common.sh@158 -- # true 00:16:08.739 09:06:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:08.739 09:06:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:08.739 
09:06:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.739 09:06:45 -- nvmf/common.sh@161 -- # true 00:16:08.739 09:06:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.739 09:06:45 -- nvmf/common.sh@162 -- # true 00:16:08.739 09:06:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.739 09:06:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.998 09:06:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.998 09:06:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.998 09:06:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.998 09:06:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.998 09:06:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.998 09:06:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.999 09:06:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:08.999 09:06:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:08.999 09:06:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:08.999 09:06:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:08.999 09:06:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:08.999 09:06:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.999 09:06:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.999 09:06:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.999 09:06:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:08.999 09:06:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:08.999 09:06:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.999 09:06:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.999 09:06:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.999 09:06:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.999 09:06:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.999 09:06:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:08.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:08.999 00:16:08.999 --- 10.0.0.2 ping statistics --- 00:16:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.999 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:08.999 09:06:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:08.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:08.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:16:08.999 00:16:08.999 --- 10.0.0.3 ping statistics --- 00:16:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.999 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:08.999 09:06:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:08.999 00:16:08.999 --- 10.0.0.1 ping statistics --- 00:16:08.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.999 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:08.999 09:06:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.999 09:06:45 -- nvmf/common.sh@421 -- # return 0 00:16:08.999 09:06:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:08.999 09:06:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.999 09:06:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:08.999 09:06:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:08.999 09:06:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.999 09:06:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:08.999 09:06:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:08.999 09:06:45 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:08.999 09:06:45 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:08.999 09:06:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:08.999 09:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:08.999 09:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:08.999 ************************************ 00:16:08.999 START TEST nvmf_digest_clean 00:16:08.999 ************************************ 00:16:08.999 09:06:45 -- common/autotest_common.sh@1114 -- # run_digest 00:16:08.999 09:06:45 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:08.999 09:06:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:08.999 09:06:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.999 09:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:08.999 09:06:45 -- nvmf/common.sh@469 -- # nvmfpid=71714 00:16:08.999 09:06:45 -- nvmf/common.sh@470 -- # waitforlisten 71714 00:16:08.999 09:06:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:08.999 09:06:45 -- common/autotest_common.sh@829 -- # '[' -z 71714 ']' 00:16:08.999 09:06:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.999 09:06:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.999 09:06:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.999 09:06:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.999 09:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:09.257 [2024-11-17 09:06:45.939330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
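For readers skimming the trace above: before the target comes up, nvmf_veth_init builds a small virtual topology, and everything the digest tests do rides on it. The following is a condensed sketch reconstructed from the commands shown in this log (interface names, addresses, and paths are exactly the ones that appear above; the second target interface at 10.0.0.3 and the individual "ip link set ... up" steps are omitted for brevity), not a verbatim replay of the script:

    # target side lives in its own network namespace, initiator side stays in the default one
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # 10.0.0.1/24, host/initiator
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # 10.0.0.2/24, target, moved into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # the *_br veth peers are enslaved to one bridge so the two namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # the ping output above is this sanity check
    # then the target is launched inside the namespace, paused until RPC configuration:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc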
00:16:09.257 [2024-11-17 09:06:45.939457] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.257 [2024-11-17 09:06:46.077118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.257 [2024-11-17 09:06:46.133663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:09.257 [2024-11-17 09:06:46.133818] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.257 [2024-11-17 09:06:46.133831] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.258 [2024-11-17 09:06:46.133839] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.258 [2024-11-17 09:06:46.133868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.192 09:06:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.192 09:06:46 -- common/autotest_common.sh@862 -- # return 0 00:16:10.192 09:06:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:10.192 09:06:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:10.192 09:06:46 -- common/autotest_common.sh@10 -- # set +x 00:16:10.192 09:06:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.192 09:06:46 -- host/digest.sh@120 -- # common_target_config 00:16:10.192 09:06:46 -- host/digest.sh@43 -- # rpc_cmd 00:16:10.192 09:06:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.192 09:06:46 -- common/autotest_common.sh@10 -- # set +x 00:16:10.192 null0 00:16:10.192 [2024-11-17 09:06:47.000057] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.192 [2024-11-17 09:06:47.024152] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.192 09:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.192 09:06:47 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:10.192 09:06:47 -- host/digest.sh@77 -- # local rw bs qd 00:16:10.192 09:06:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:10.192 09:06:47 -- host/digest.sh@80 -- # rw=randread 00:16:10.192 09:06:47 -- host/digest.sh@80 -- # bs=4096 00:16:10.192 09:06:47 -- host/digest.sh@80 -- # qd=128 00:16:10.192 09:06:47 -- host/digest.sh@82 -- # bperfpid=71746 00:16:10.192 09:06:47 -- host/digest.sh@83 -- # waitforlisten 71746 /var/tmp/bperf.sock 00:16:10.192 09:06:47 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:10.192 09:06:47 -- common/autotest_common.sh@829 -- # '[' -z 71746 ']' 00:16:10.192 09:06:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:10.192 09:06:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.192 09:06:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:10.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:16:10.192 09:06:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.192 09:06:47 -- common/autotest_common.sh@10 -- # set +x 00:16:10.192 [2024-11-17 09:06:47.078302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:10.193 [2024-11-17 09:06:47.078561] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71746 ] 00:16:10.452 [2024-11-17 09:06:47.216923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.452 [2024-11-17 09:06:47.289578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.452 09:06:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.452 09:06:47 -- common/autotest_common.sh@862 -- # return 0 00:16:10.452 09:06:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:10.452 09:06:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:10.452 09:06:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:10.710 09:06:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:10.710 09:06:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:10.969 nvme0n1 00:16:10.969 09:06:47 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:10.969 09:06:47 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:11.228 Running I/O for 2 seconds... 
00:16:13.128 00:16:13.128 Latency(us) 00:16:13.128 [2024-11-17T09:06:50.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.128 [2024-11-17T09:06:50.058Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:13.128 nvme0n1 : 2.00 16408.55 64.10 0.00 0.00 7795.60 6911.07 21209.83 00:16:13.128 [2024-11-17T09:06:50.058Z] =================================================================================================================== 00:16:13.128 [2024-11-17T09:06:50.058Z] Total : 16408.55 64.10 0.00 0.00 7795.60 6911.07 21209.83 00:16:13.128 0 00:16:13.128 09:06:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:13.128 09:06:50 -- host/digest.sh@92 -- # get_accel_stats 00:16:13.128 09:06:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:13.128 09:06:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:13.128 | select(.opcode=="crc32c") 00:16:13.128 | "\(.module_name) \(.executed)"' 00:16:13.128 09:06:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:13.386 09:06:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:13.386 09:06:50 -- host/digest.sh@93 -- # exp_module=software 00:16:13.386 09:06:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:13.386 09:06:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:13.386 09:06:50 -- host/digest.sh@97 -- # killprocess 71746 00:16:13.386 09:06:50 -- common/autotest_common.sh@936 -- # '[' -z 71746 ']' 00:16:13.387 09:06:50 -- common/autotest_common.sh@940 -- # kill -0 71746 00:16:13.387 09:06:50 -- common/autotest_common.sh@941 -- # uname 00:16:13.387 09:06:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.387 09:06:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71746 00:16:13.645 killing process with pid 71746 00:16:13.645 Received shutdown signal, test time was about 2.000000 seconds 00:16:13.645 00:16:13.645 Latency(us) 00:16:13.645 [2024-11-17T09:06:50.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.645 [2024-11-17T09:06:50.575Z] =================================================================================================================== 00:16:13.645 [2024-11-17T09:06:50.575Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.645 09:06:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:13.645 09:06:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:13.645 09:06:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71746' 00:16:13.645 09:06:50 -- common/autotest_common.sh@955 -- # kill 71746 00:16:13.645 09:06:50 -- common/autotest_common.sh@960 -- # wait 71746 00:16:13.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
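Each run_bperf iteration in this test repeats the same RPC choreography against a fresh bdevperf instance; only the workload (-w), I/O size (-o) and queue depth (-q) change. A rough sketch, using only the sockets, flags and addresses visible in this log, with paths shortened to the spdk repo root:

    # bdevperf starts paused (-z --wait-for-rpc) on its own RPC socket
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # digest.sh then drives it over that socket: finish init, attach the target with data digest on
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # after the 2-second run, the accel stats are checked to confirm which module executed crc32c
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The latency table just above and the "software <count>" accel result are the outputs of the last two steps of this sequence.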
00:16:13.645 09:06:50 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:13.645 09:06:50 -- host/digest.sh@77 -- # local rw bs qd 00:16:13.645 09:06:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:13.645 09:06:50 -- host/digest.sh@80 -- # rw=randread 00:16:13.645 09:06:50 -- host/digest.sh@80 -- # bs=131072 00:16:13.645 09:06:50 -- host/digest.sh@80 -- # qd=16 00:16:13.645 09:06:50 -- host/digest.sh@82 -- # bperfpid=71799 00:16:13.645 09:06:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:13.645 09:06:50 -- host/digest.sh@83 -- # waitforlisten 71799 /var/tmp/bperf.sock 00:16:13.645 09:06:50 -- common/autotest_common.sh@829 -- # '[' -z 71799 ']' 00:16:13.645 09:06:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:13.645 09:06:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.645 09:06:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:13.645 09:06:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.645 09:06:50 -- common/autotest_common.sh@10 -- # set +x 00:16:13.645 [2024-11-17 09:06:50.554192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:13.645 [2024-11-17 09:06:50.554525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71799 ] 00:16:13.645 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:13.645 Zero copy mechanism will not be used. 00:16:13.904 [2024-11-17 09:06:50.694303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.904 [2024-11-17 09:06:50.755525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.904 09:06:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.904 09:06:50 -- common/autotest_common.sh@862 -- # return 0 00:16:13.904 09:06:50 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:13.904 09:06:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:13.904 09:06:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:14.162 09:06:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:14.162 09:06:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:14.730 nvme0n1 00:16:14.730 09:06:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:14.730 09:06:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:14.730 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:14.730 Zero copy mechanism will not be used. 00:16:14.730 Running I/O for 2 seconds... 
00:16:16.634 00:16:16.634 Latency(us) 00:16:16.634 [2024-11-17T09:06:53.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.634 [2024-11-17T09:06:53.564Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:16.634 nvme0n1 : 2.00 7975.88 996.98 0.00 0.00 2003.24 1742.66 5630.14 00:16:16.634 [2024-11-17T09:06:53.564Z] =================================================================================================================== 00:16:16.634 [2024-11-17T09:06:53.564Z] Total : 7975.88 996.98 0.00 0.00 2003.24 1742.66 5630.14 00:16:16.634 0 00:16:16.634 09:06:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:16.634 09:06:53 -- host/digest.sh@92 -- # get_accel_stats 00:16:16.634 09:06:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:16.634 09:06:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:16.634 09:06:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:16.634 | select(.opcode=="crc32c") 00:16:16.634 | "\(.module_name) \(.executed)"' 00:16:17.202 09:06:53 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:17.202 09:06:53 -- host/digest.sh@93 -- # exp_module=software 00:16:17.202 09:06:53 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:17.202 09:06:53 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:17.202 09:06:53 -- host/digest.sh@97 -- # killprocess 71799 00:16:17.202 09:06:53 -- common/autotest_common.sh@936 -- # '[' -z 71799 ']' 00:16:17.202 09:06:53 -- common/autotest_common.sh@940 -- # kill -0 71799 00:16:17.202 09:06:53 -- common/autotest_common.sh@941 -- # uname 00:16:17.202 09:06:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:17.202 09:06:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71799 00:16:17.202 killing process with pid 71799 00:16:17.202 Received shutdown signal, test time was about 2.000000 seconds 00:16:17.202 00:16:17.202 Latency(us) 00:16:17.202 [2024-11-17T09:06:54.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.202 [2024-11-17T09:06:54.132Z] =================================================================================================================== 00:16:17.202 [2024-11-17T09:06:54.132Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.202 09:06:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:17.202 09:06:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:17.202 09:06:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71799' 00:16:17.202 09:06:53 -- common/autotest_common.sh@955 -- # kill 71799 00:16:17.202 09:06:53 -- common/autotest_common.sh@960 -- # wait 71799 00:16:17.202 09:06:54 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:17.202 09:06:54 -- host/digest.sh@77 -- # local rw bs qd 00:16:17.202 09:06:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:17.202 09:06:54 -- host/digest.sh@80 -- # rw=randwrite 00:16:17.202 09:06:54 -- host/digest.sh@80 -- # bs=4096 00:16:17.202 09:06:54 -- host/digest.sh@80 -- # qd=128 00:16:17.202 09:06:54 -- host/digest.sh@82 -- # bperfpid=71846 00:16:17.202 09:06:54 -- host/digest.sh@83 -- # waitforlisten 71846 /var/tmp/bperf.sock 00:16:17.202 09:06:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:17.202 09:06:54 -- 
common/autotest_common.sh@829 -- # '[' -z 71846 ']' 00:16:17.202 09:06:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:17.202 09:06:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.202 09:06:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:17.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:17.202 09:06:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.202 09:06:54 -- common/autotest_common.sh@10 -- # set +x 00:16:17.461 [2024-11-17 09:06:54.137848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:17.461 [2024-11-17 09:06:54.138167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71846 ] 00:16:17.461 [2024-11-17 09:06:54.277824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.461 [2024-11-17 09:06:54.333817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.462 09:06:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.462 09:06:54 -- common/autotest_common.sh@862 -- # return 0 00:16:17.462 09:06:54 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:17.462 09:06:54 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:17.462 09:06:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:18.031 09:06:54 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:18.031 09:06:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:18.290 nvme0n1 00:16:18.290 09:06:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:18.290 09:06:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:18.290 Running I/O for 2 seconds... 
00:16:20.830 00:16:20.830 Latency(us) 00:16:20.830 [2024-11-17T09:06:57.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.830 [2024-11-17T09:06:57.760Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.830 nvme0n1 : 2.01 17708.81 69.18 0.00 0.00 7222.87 6076.97 16324.42 00:16:20.830 [2024-11-17T09:06:57.760Z] =================================================================================================================== 00:16:20.830 [2024-11-17T09:06:57.760Z] Total : 17708.81 69.18 0.00 0.00 7222.87 6076.97 16324.42 00:16:20.830 0 00:16:20.830 09:06:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:20.830 09:06:57 -- host/digest.sh@92 -- # get_accel_stats 00:16:20.830 09:06:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:20.830 09:06:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:20.830 | select(.opcode=="crc32c") 00:16:20.830 | "\(.module_name) \(.executed)"' 00:16:20.830 09:06:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:20.830 09:06:57 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:20.830 09:06:57 -- host/digest.sh@93 -- # exp_module=software 00:16:20.830 09:06:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:20.830 09:06:57 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:20.830 09:06:57 -- host/digest.sh@97 -- # killprocess 71846 00:16:20.830 09:06:57 -- common/autotest_common.sh@936 -- # '[' -z 71846 ']' 00:16:20.830 09:06:57 -- common/autotest_common.sh@940 -- # kill -0 71846 00:16:20.830 09:06:57 -- common/autotest_common.sh@941 -- # uname 00:16:20.830 09:06:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:20.830 09:06:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71846 00:16:20.830 killing process with pid 71846 00:16:20.830 Received shutdown signal, test time was about 2.000000 seconds 00:16:20.830 00:16:20.830 Latency(us) 00:16:20.830 [2024-11-17T09:06:57.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.830 [2024-11-17T09:06:57.760Z] =================================================================================================================== 00:16:20.830 [2024-11-17T09:06:57.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:20.830 09:06:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:20.830 09:06:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:20.830 09:06:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71846' 00:16:20.830 09:06:57 -- common/autotest_common.sh@955 -- # kill 71846 00:16:20.830 09:06:57 -- common/autotest_common.sh@960 -- # wait 71846 00:16:20.830 09:06:57 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:20.830 09:06:57 -- host/digest.sh@77 -- # local rw bs qd 00:16:20.830 09:06:57 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:20.830 09:06:57 -- host/digest.sh@80 -- # rw=randwrite 00:16:20.830 09:06:57 -- host/digest.sh@80 -- # bs=131072 00:16:20.830 09:06:57 -- host/digest.sh@80 -- # qd=16 00:16:20.830 09:06:57 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:20.830 09:06:57 -- host/digest.sh@82 -- # bperfpid=71900 00:16:20.830 09:06:57 -- host/digest.sh@83 -- # waitforlisten 71900 /var/tmp/bperf.sock 00:16:20.830 09:06:57 -- 
common/autotest_common.sh@829 -- # '[' -z 71900 ']' 00:16:20.830 09:06:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:20.830 09:06:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.830 09:06:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:20.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:20.830 09:06:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.830 09:06:57 -- common/autotest_common.sh@10 -- # set +x 00:16:20.830 [2024-11-17 09:06:57.687453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:20.830 [2024-11-17 09:06:57.687745] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71900 ] 00:16:20.830 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:20.830 Zero copy mechanism will not be used. 00:16:21.090 [2024-11-17 09:06:57.819483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.090 [2024-11-17 09:06:57.874380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.027 09:06:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.027 09:06:58 -- common/autotest_common.sh@862 -- # return 0 00:16:22.027 09:06:58 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:22.027 09:06:58 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:22.027 09:06:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:22.027 09:06:58 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:22.027 09:06:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:22.286 nvme0n1 00:16:22.600 09:06:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:22.600 09:06:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:22.600 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:22.600 Zero copy mechanism will not be used. 00:16:22.600 Running I/O for 2 seconds... 
00:16:24.502 00:16:24.502 Latency(us) 00:16:24.502 [2024-11-17T09:07:01.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.502 [2024-11-17T09:07:01.432Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:24.502 nvme0n1 : 2.00 6824.21 853.03 0.00 0.00 2339.67 2025.66 8400.52 00:16:24.502 [2024-11-17T09:07:01.432Z] =================================================================================================================== 00:16:24.502 [2024-11-17T09:07:01.432Z] Total : 6824.21 853.03 0.00 0.00 2339.67 2025.66 8400.52 00:16:24.502 0 00:16:24.502 09:07:01 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:24.502 09:07:01 -- host/digest.sh@92 -- # get_accel_stats 00:16:24.502 09:07:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:24.502 | select(.opcode=="crc32c") 00:16:24.502 | "\(.module_name) \(.executed)"' 00:16:24.502 09:07:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:24.502 09:07:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:24.761 09:07:01 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:24.761 09:07:01 -- host/digest.sh@93 -- # exp_module=software 00:16:24.761 09:07:01 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:24.761 09:07:01 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:24.761 09:07:01 -- host/digest.sh@97 -- # killprocess 71900 00:16:24.761 09:07:01 -- common/autotest_common.sh@936 -- # '[' -z 71900 ']' 00:16:24.761 09:07:01 -- common/autotest_common.sh@940 -- # kill -0 71900 00:16:24.761 09:07:01 -- common/autotest_common.sh@941 -- # uname 00:16:24.761 09:07:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:24.761 09:07:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71900 00:16:24.761 killing process with pid 71900 00:16:24.761 Received shutdown signal, test time was about 2.000000 seconds 00:16:24.761 00:16:24.761 Latency(us) 00:16:24.761 [2024-11-17T09:07:01.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.761 [2024-11-17T09:07:01.691Z] =================================================================================================================== 00:16:24.761 [2024-11-17T09:07:01.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:24.761 09:07:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:24.761 09:07:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:24.761 09:07:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71900' 00:16:24.761 09:07:01 -- common/autotest_common.sh@955 -- # kill 71900 00:16:24.761 09:07:01 -- common/autotest_common.sh@960 -- # wait 71900 00:16:25.020 09:07:01 -- host/digest.sh@126 -- # killprocess 71714 00:16:25.020 09:07:01 -- common/autotest_common.sh@936 -- # '[' -z 71714 ']' 00:16:25.020 09:07:01 -- common/autotest_common.sh@940 -- # kill -0 71714 00:16:25.020 09:07:01 -- common/autotest_common.sh@941 -- # uname 00:16:25.020 09:07:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.020 09:07:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71714 00:16:25.020 killing process with pid 71714 00:16:25.020 09:07:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:25.020 09:07:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:25.020 09:07:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71714' 00:16:25.020 
09:07:01 -- common/autotest_common.sh@955 -- # kill 71714 00:16:25.020 09:07:01 -- common/autotest_common.sh@960 -- # wait 71714 00:16:25.278 ************************************ 00:16:25.278 END TEST nvmf_digest_clean 00:16:25.278 ************************************ 00:16:25.278 00:16:25.278 real 0m16.162s 00:16:25.278 user 0m31.003s 00:16:25.278 sys 0m4.290s 00:16:25.278 09:07:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:25.278 09:07:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.278 09:07:02 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:25.278 09:07:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:25.278 09:07:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.278 09:07:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.278 ************************************ 00:16:25.278 START TEST nvmf_digest_error 00:16:25.278 ************************************ 00:16:25.278 09:07:02 -- common/autotest_common.sh@1114 -- # run_digest_error 00:16:25.278 09:07:02 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:25.278 09:07:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:25.278 09:07:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.278 09:07:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.278 09:07:02 -- nvmf/common.sh@469 -- # nvmfpid=71983 00:16:25.278 09:07:02 -- nvmf/common.sh@470 -- # waitforlisten 71983 00:16:25.278 09:07:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:25.278 09:07:02 -- common/autotest_common.sh@829 -- # '[' -z 71983 ']' 00:16:25.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.278 09:07:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.278 09:07:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.278 09:07:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.278 09:07:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.278 09:07:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.278 [2024-11-17 09:07:02.145994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:25.278 [2024-11-17 09:07:02.146114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.537 [2024-11-17 09:07:02.275594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.537 [2024-11-17 09:07:02.330527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:25.537 [2024-11-17 09:07:02.330742] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.537 [2024-11-17 09:07:02.330756] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.537 [2024-11-17 09:07:02.330765] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:25.537 [2024-11-17 09:07:02.330794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.537 09:07:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.537 09:07:02 -- common/autotest_common.sh@862 -- # return 0 00:16:25.537 09:07:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:25.537 09:07:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.537 09:07:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.537 09:07:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.537 09:07:02 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:25.537 09:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.537 09:07:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.537 [2024-11-17 09:07:02.395200] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:25.537 09:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.537 09:07:02 -- host/digest.sh@104 -- # common_target_config 00:16:25.537 09:07:02 -- host/digest.sh@43 -- # rpc_cmd 00:16:25.537 09:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.537 09:07:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.796 null0 00:16:25.796 [2024-11-17 09:07:02.468481] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.796 [2024-11-17 09:07:02.492581] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:25.796 09:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.796 09:07:02 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:25.796 09:07:02 -- host/digest.sh@54 -- # local rw bs qd 00:16:25.796 09:07:02 -- host/digest.sh@56 -- # rw=randread 00:16:25.796 09:07:02 -- host/digest.sh@56 -- # bs=4096 00:16:25.796 09:07:02 -- host/digest.sh@56 -- # qd=128 00:16:25.796 09:07:02 -- host/digest.sh@58 -- # bperfpid=72013 00:16:25.796 09:07:02 -- host/digest.sh@60 -- # waitforlisten 72013 /var/tmp/bperf.sock 00:16:25.796 09:07:02 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:25.796 09:07:02 -- common/autotest_common.sh@829 -- # '[' -z 72013 ']' 00:16:25.796 09:07:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:25.796 09:07:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.796 09:07:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:25.796 09:07:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.796 09:07:02 -- common/autotest_common.sh@10 -- # set +x 00:16:25.796 [2024-11-17 09:07:02.553446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:25.796 [2024-11-17 09:07:02.553810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72013 ] 00:16:25.796 [2024-11-17 09:07:02.690077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.054 [2024-11-17 09:07:02.747891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.620 09:07:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.620 09:07:03 -- common/autotest_common.sh@862 -- # return 0 00:16:26.620 09:07:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:26.620 09:07:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:27.188 09:07:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:27.188 09:07:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.188 09:07:03 -- common/autotest_common.sh@10 -- # set +x 00:16:27.188 09:07:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.188 09:07:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.188 09:07:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.447 nvme0n1 00:16:27.447 09:07:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:27.447 09:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.447 09:07:04 -- common/autotest_common.sh@10 -- # set +x 00:16:27.447 09:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.447 09:07:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:27.447 09:07:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:27.447 Running I/O for 2 seconds... 
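The nvmf_digest_error variant that starts here differs from the clean runs in one respect: the target's crc32c work is routed through the error-injection accel module and deliberately corrupted, so the data-digest failures printed below are the expected outcome, not a regression. A condensed sketch of the setup, taken from the commands in this log (rpc_cmd is the suite's helper that talks to the target app's RPC socket, shown above as /var/tmp/spdk.sock; the other calls go to the bdevperf socket):

    # at target startup (digest.sh@103): assign crc32c to the "error" accel module
    rpc_cmd accel_assign_opc -o crc32c -m error
    # per run: unlimited bdev retries, injection off while the controller attaches with --ddgst
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # then corrupt the crc32c result for 256 operations and run the workload
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # each corrupted digest shows up below as "data digest error on tqpair" followed by a
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on the host side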
00:16:27.447 [2024-11-17 09:07:04.330834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.447 [2024-11-17 09:07:04.330896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.447 [2024-11-17 09:07:04.330927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.447 [2024-11-17 09:07:04.345970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.447 [2024-11-17 09:07:04.346265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.447 [2024-11-17 09:07:04.346301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.447 [2024-11-17 09:07:04.361389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.447 [2024-11-17 09:07:04.361636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.447 [2024-11-17 09:07:04.361672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.705 [2024-11-17 09:07:04.378115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.705 [2024-11-17 09:07:04.378310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.705 [2024-11-17 09:07:04.378344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.705 [2024-11-17 09:07:04.393083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.705 [2024-11-17 09:07:04.393299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.393333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.408896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.408957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.408972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.427748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.427786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.427817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.444466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.444501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.444530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.459998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.460033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.460062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.475132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.475183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.475212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.490266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.490459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.490493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.505549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.505781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.505815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.521075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.521265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.521300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.536695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.536733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.536762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.551839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.552035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.552070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.567429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.567465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.567494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.583062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.583098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.583127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.600403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.600446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.600460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.706 [2024-11-17 09:07:04.617456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.706 [2024-11-17 09:07:04.617528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.706 [2024-11-17 09:07:04.617557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.636550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.636615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.636646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.654071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.654269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.654287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.672304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.672349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.672364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.689527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.689840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.689860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.705367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.705544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.705563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.721092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.721136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.721150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.738901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.739084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.739102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.756888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.756952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.756982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.773087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.773124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:27.965 [2024-11-17 09:07:04.773153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.788245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.788283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.788311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.802988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.803023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.803051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.817806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.817843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.817872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.832550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.832584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.832642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.847024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.847060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.847088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.861680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.861755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.861784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.965 [2024-11-17 09:07:04.877043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:27.965 [2024-11-17 09:07:04.877102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:10506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.965 [2024-11-17 09:07:04.877133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:04.893223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:04.893258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:04.893286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:04.908581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:04.908640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:04.908669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:04.923762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:04.923830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:04.923860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:04.939050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:04.939103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:04.939133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:04.953545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:04.953582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:04.953638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:04.967843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:04.967876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:04.967904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:04.982232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:04.982436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:04.982469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:04.996815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:04.997000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:04.997033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.011854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:05.011903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:05.011933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.027076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:05.027135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:05.027180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.041888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:05.042104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:05.042136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.056764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:05.056950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:05.056983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.071505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:05.071541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:05.071569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.085911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 
00:16:28.225 [2024-11-17 09:07:05.086118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:05.086152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.100471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:05.100507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:05.100535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.114983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:05.115032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.225 [2024-11-17 09:07:05.115060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.225 [2024-11-17 09:07:05.129381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.225 [2024-11-17 09:07:05.129415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.226 [2024-11-17 09:07:05.129444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.226 [2024-11-17 09:07:05.144850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.226 [2024-11-17 09:07:05.144883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.226 [2024-11-17 09:07:05.144911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.485 [2024-11-17 09:07:05.160635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.485 [2024-11-17 09:07:05.160671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.485 [2024-11-17 09:07:05.160699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.175209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.175243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.175271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.189621] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.189654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.189682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.204035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.204068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.204096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.218752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.218928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.218961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.233535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.233619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.233634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.248334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.248369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.248399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.262871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.262904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.262932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.277440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.277474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.277501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.291941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.291975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.292003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.312646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.312679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.312707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.327145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.327178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.327206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.341528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.341573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.341602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.356161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.356195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.356223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.370753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.370786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.370815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.385274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.385307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.385336] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.486 [2024-11-17 09:07:05.399713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.486 [2024-11-17 09:07:05.399746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.486 [2024-11-17 09:07:05.399774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.415490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.415558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.415587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.430881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.430917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.430946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.448062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.448128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.448157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.464414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.464449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.464477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.479181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.479215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.479243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.493568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.493643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 
09:07:05.493657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.508213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.508260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.508289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.522681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.522713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.522741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.537229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.537276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.537305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.551721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.551753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.551781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.566325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.566359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.566387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.580808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.581004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.581037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.595372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.595408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24920 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.595436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.610297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.610491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.610540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.626101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.626151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.626191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.640858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.641032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.641065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.656355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.656542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.656574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:28.746 [2024-11-17 09:07:05.671808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:28.746 [2024-11-17 09:07:05.672032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:28.746 [2024-11-17 09:07:05.672310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.689101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.689345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.689487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.707952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.708178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:7462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.708341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.726864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.727168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.727360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.746111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.746318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.746500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.764452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.764757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.764958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.782822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.782863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.782893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.798997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.799036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.799065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.815785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.815825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.815855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.834337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.834383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.834398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.853137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.853201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.853232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.871735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.871778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.871823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.889151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.889223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.889253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.905907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.906105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.906138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.006 [2024-11-17 09:07:05.922263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.006 [2024-11-17 09:07:05.922300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.006 [2024-11-17 09:07:05.922329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:05.938239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:05.938273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:05.938302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:05.953370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 
00:16:29.266 [2024-11-17 09:07:05.953405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:05.953434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:05.968540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:05.968577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:05.968620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:05.984087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:05.984122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:05.984150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.000762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.000822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.000851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.015876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.015910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.015938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.030407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.030628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.030646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.045007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.045195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.045229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.059707] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.059742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.059770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.074460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.074727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.074748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.090193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.090383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.090416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.104945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.105131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.105164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.119730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.119934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.119968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.134571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.134775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.134808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.149268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.149303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.149333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.164705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.165000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.165020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.266 [2024-11-17 09:07:06.179879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.266 [2024-11-17 09:07:06.180063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.266 [2024-11-17 09:07:06.180097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 [2024-11-17 09:07:06.195533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.526 [2024-11-17 09:07:06.195567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.526 [2024-11-17 09:07:06.195595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 [2024-11-17 09:07:06.210343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.526 [2024-11-17 09:07:06.210571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.526 [2024-11-17 09:07:06.210604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 [2024-11-17 09:07:06.225347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.526 [2024-11-17 09:07:06.225671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.526 [2024-11-17 09:07:06.225690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 [2024-11-17 09:07:06.240681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.526 [2024-11-17 09:07:06.240854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.526 [2024-11-17 09:07:06.240887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 [2024-11-17 09:07:06.255771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.526 [2024-11-17 09:07:06.255805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.526 [2024-11-17 09:07:06.255833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 [2024-11-17 09:07:06.270304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.526 [2024-11-17 09:07:06.270490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.526 [2024-11-17 09:07:06.270523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 [2024-11-17 09:07:06.285177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.526 [2024-11-17 09:07:06.285363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.526 [2024-11-17 09:07:06.285395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 [2024-11-17 09:07:06.300104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1592d40) 00:16:29.526 [2024-11-17 09:07:06.300289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.526 [2024-11-17 09:07:06.300322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.526 00:16:29.526 Latency(us) 00:16:29.526 [2024-11-17T09:07:06.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.526 [2024-11-17T09:07:06.456Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:29.526 nvme0n1 : 2.01 16213.24 63.33 0.00 0.00 7889.72 6881.28 28240.06 00:16:29.526 [2024-11-17T09:07:06.456Z] =================================================================================================================== 00:16:29.526 [2024-11-17T09:07:06.456Z] Total : 16213.24 63.33 0.00 0.00 7889.72 6881.28 28240.06 00:16:29.526 0 00:16:29.526 09:07:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:29.526 09:07:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:29.526 09:07:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:29.526 | .driver_specific 00:16:29.526 | .nvme_error 00:16:29.526 | .status_code 00:16:29.526 | .command_transient_transport_error' 00:16:29.526 09:07:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:29.785 09:07:06 -- host/digest.sh@71 -- # (( 127 > 0 )) 00:16:29.785 09:07:06 -- host/digest.sh@73 -- # killprocess 72013 00:16:29.785 09:07:06 -- common/autotest_common.sh@936 -- # '[' -z 72013 ']' 00:16:29.785 09:07:06 -- common/autotest_common.sh@940 -- # kill -0 72013 00:16:29.785 09:07:06 -- common/autotest_common.sh@941 -- # uname 00:16:29.785 09:07:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.785 09:07:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72013 00:16:29.785 killing process with pid 72013 00:16:29.785 Received shutdown signal, test time was about 2.000000 seconds 00:16:29.785 00:16:29.785 Latency(us) 00:16:29.785 [2024-11-17T09:07:06.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.785 [2024-11-17T09:07:06.715Z] 
=================================================================================================================== 00:16:29.785 [2024-11-17T09:07:06.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.785 09:07:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:29.786 09:07:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:29.786 09:07:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72013' 00:16:29.786 09:07:06 -- common/autotest_common.sh@955 -- # kill 72013 00:16:29.786 09:07:06 -- common/autotest_common.sh@960 -- # wait 72013 00:16:30.044 09:07:06 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:16:30.044 09:07:06 -- host/digest.sh@54 -- # local rw bs qd 00:16:30.044 09:07:06 -- host/digest.sh@56 -- # rw=randread 00:16:30.044 09:07:06 -- host/digest.sh@56 -- # bs=131072 00:16:30.044 09:07:06 -- host/digest.sh@56 -- # qd=16 00:16:30.044 09:07:06 -- host/digest.sh@58 -- # bperfpid=72068 00:16:30.044 09:07:06 -- host/digest.sh@60 -- # waitforlisten 72068 /var/tmp/bperf.sock 00:16:30.044 09:07:06 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:30.044 09:07:06 -- common/autotest_common.sh@829 -- # '[' -z 72068 ']' 00:16:30.044 09:07:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:30.044 09:07:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.044 09:07:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:30.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:30.044 09:07:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.044 09:07:06 -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 [2024-11-17 09:07:06.876137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:30.045 [2024-11-17 09:07:06.876390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72068 ] 00:16:30.045 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:30.045 Zero copy mechanism will not be used. 
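For readers following the trace: the `get_transient_errcount nvme0n1` step shown above (host/digest.sh@71) reads bdevperf's per-bdev NVMe error counters over the `/var/tmp/bperf.sock` RPC socket and treats a non-zero `command_transient_transport_error` count as proof that the injected data-digest errors surfaced as transient transport errors (the trace evaluates `(( 127 > 0 ))` here). Below is a condensed sketch of that check, built only from the commands visible in the trace; the shell variables and the final echo are illustrative, not the script's own, and error handling is omitted.

```bash
#!/usr/bin/env bash
# Sketch of the transient-error check traced above (host/digest.sh get_transient_errcount).
# Paths, RPC socket and bdev name are taken verbatim from the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Query bdevperf's iostat for nvme0n1 and pull out the NVMe error counter
# that bdev_nvme_set_options --nvme-error-stat makes available.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The test only passes if the injected digest errors actually produced
# transient transport errors (127 in the run traced above).
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"
```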
00:16:30.303 [2024-11-17 09:07:07.006700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.303 [2024-11-17 09:07:07.062970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.240 09:07:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.240 09:07:07 -- common/autotest_common.sh@862 -- # return 0 00:16:31.240 09:07:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:31.240 09:07:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:31.240 09:07:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:31.240 09:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.240 09:07:08 -- common/autotest_common.sh@10 -- # set +x 00:16:31.240 09:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.240 09:07:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.240 09:07:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.499 nvme0n1 00:16:31.499 09:07:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:31.499 09:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.499 09:07:08 -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 09:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.499 09:07:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:31.499 09:07:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:31.761 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:31.761 Zero copy mechanism will not be used. 00:16:31.761 Running I/O for 2 seconds... 
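The setup traced just above, ahead of the second `run_bperf_err randread 131072 16` pass, follows a fixed sequence: start bdevperf against the bperf RPC socket, enable NVMe error statistics with unlimited retries, attach the TCP controller with data digest enabled while CRC32C error injection is disabled, then re-arm the injection and drive I/O via `perform_tests`. The sketch below assembles that sequence from the commands in the trace; the pid bookkeeping (`waitforlisten`, `killprocess`) and the meaning of the `-i 32` injection argument are left as they appear in the log, and `$spdk`/`$sock` are illustrative variables.

```bash
#!/usr/bin/env bash
# Sketch of the digest-error run being set up in the trace above.
# All commands and arguments appear verbatim in the log; only the
# surrounding plumbing (waitforlisten, killprocess) is omitted.

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bperf.sock

# Start bdevperf in wait-for-RPC mode (-z): 131072-byte random reads, qd 16, 2s run.
"$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &

# Record NVMe errors per status code and retry indefinitely, so digest failures
# show up as counted transient errors instead of failing the job outright.
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with TCP data digest (--ddgst) while CRC32C error
# injection on the target is disabled, so the connect itself succeeds.
# (These injection RPCs carry no -s flag in the trace; they go to the
# target app's default RPC socket, not to bperf.sock.)
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm CRC32C corruption (with -i 32, as in the trace) and run the workload;
# each corrupted digest appears below as a COMMAND TRANSIENT TRANSPORT ERROR.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
```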
00:16:31.761 [2024-11-17 09:07:08.499480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.499526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.499541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.504100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.504139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.504184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.508850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.508892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.508906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.513380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.513417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.513446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.518109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.518320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.518354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.523018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.523256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.523409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.528060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.528300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.528459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.532815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.533018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.533158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.537314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.537535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.537794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.542199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.542409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.542620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.546844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.547046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.547248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.551333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.551553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.551733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.556093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.556291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.556326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.560570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.761 [2024-11-17 09:07:08.560794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.761 [2024-11-17 09:07:08.560939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.761 [2024-11-17 09:07:08.565435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.565643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.565932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.570249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.570450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.570573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.574798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.574835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.574864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.578951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.578988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.579017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.583116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.583152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.583181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.587305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.587341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.587370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.591455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.591492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:31.762 [2024-11-17 09:07:08.591520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.595551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.595588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.595646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.599644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.599679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.599707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.603849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.603885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.603914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.607916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.607970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.607999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.611916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.611952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.611981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.615927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.615963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.615991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.619955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.619991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.620020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.624078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.624114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.624158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.628182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.628219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.628248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.632220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.632257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.632286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.636434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.636471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.636500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.640582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.640645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.640675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.644664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.644699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.644727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.648667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.648702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.648730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.652770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.652805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.652834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.656826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.656862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.656891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.660837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.660873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.660901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.664859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.664894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.664923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.668899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.668934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.668963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.672893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.672928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.672956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.676921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 
00:16:31.762 [2024-11-17 09:07:08.676956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.676985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.762 [2024-11-17 09:07:08.680991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:31.762 [2024-11-17 09:07:08.681027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.762 [2024-11-17 09:07:08.681055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.685454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.685491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.685520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.690054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.690094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.690108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.694724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.694762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.694792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.699295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.699352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.699367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.703948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.704138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.704158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.708763] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.708804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.708818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.713288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.713330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.713360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.717941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.717983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.717997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.722594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.722663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.722680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.727535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.727577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.727607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.732007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.732046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.732075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.736800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.736837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.029 [2024-11-17 09:07:08.736867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:32.029 [2024-11-17 09:07:08.741358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.029 [2024-11-17 09:07:08.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.741413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.746219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.746424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.746443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.750927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.750964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.750993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.755054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.755090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.755119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.759349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.759386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.759415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.763588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.763633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.763662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.767726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.767761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.767790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.771862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.771897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.771926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.776091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.776128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.776174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.780370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.780407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.780436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.784571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.784635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.784665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.788710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.788745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.788773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.792775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.792829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.792859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.796797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.796832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.796861] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.801066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.801102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.801132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.805430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.805468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.805497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.810274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.810315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.810329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.814892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.814930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.814943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.819726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.819762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.819791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.824250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.824291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.824305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.828787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.828822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.828851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.833119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.833188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.030 [2024-11-17 09:07:08.833201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.030 [2024-11-17 09:07:08.837445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.030 [2024-11-17 09:07:08.837483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.837496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.841942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.841998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.842012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.846356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.846392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.846405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.850531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.850567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.850597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.854753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.854788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.854816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.858839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.858874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.858903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.862867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.862901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.862930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.867024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.867069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.867098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.871182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.871218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.871246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.875272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.875309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.875337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.879397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.879433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.879461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.883498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.883534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.883563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.887670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.887704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.887733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.891858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.891894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.891922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.895892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.895928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.895957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.899994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.900030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.900058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.904043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.904080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.904109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.908186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.908223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.908252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.031 [2024-11-17 09:07:08.912278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.031 [2024-11-17 09:07:08.912315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.031 [2024-11-17 09:07:08.912344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.916497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 
00:16:32.032 [2024-11-17 09:07:08.916534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.916578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.920647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.032 [2024-11-17 09:07:08.920681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.920708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.924822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.032 [2024-11-17 09:07:08.924858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.924887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.929101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.032 [2024-11-17 09:07:08.929152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.929181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.933351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.032 [2024-11-17 09:07:08.933388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.933417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.937693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.032 [2024-11-17 09:07:08.937781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.937811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.942634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.032 [2024-11-17 09:07:08.942715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.942730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.947381] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.032 [2024-11-17 09:07:08.947420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.947433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.032 [2024-11-17 09:07:08.952073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.032 [2024-11-17 09:07:08.952110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.032 [2024-11-17 09:07:08.952138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.956895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.956930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.956958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.961556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.961778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.961797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.966423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.966649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.966813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.971075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.971282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.971486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.976016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.976224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.976426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.980676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.980880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.981080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.985112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.985324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.985480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.989650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.989853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.989992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.994318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.994508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.994740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:08.998845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:08.999054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:08.999210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:09.003493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:09.003707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:09.003909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.299 [2024-11-17 09:07:09.008215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.299 [2024-11-17 09:07:09.008414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.299 [2024-11-17 09:07:09.008557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.012752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.012950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.013095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.017229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.017414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.017558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.021936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.022187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.022224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.026356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.026393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.026421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.030941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.030978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.031007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.035629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.035717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.035749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.039941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.039977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.040006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.044089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.044126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.044156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.048193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.048229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.048257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.052255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.052292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.052320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.056274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.056310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.056339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.060389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.060425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.060454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.064536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.064572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.064600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.068634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.068668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:32.300 [2024-11-17 09:07:09.068696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.072663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.072697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.072726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.076662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.076697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.076725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.080689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.080724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.080752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.084796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.084832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.084861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.088780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.088816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.088844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.092743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.092779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.092807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.096686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.096720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.096748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.100741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.100776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.100804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.104708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.104756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.104786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.108709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.108743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.108771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.112712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.112747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.112775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.116699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.116733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.116761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.120763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.120798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.120826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.124711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.124745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.124774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.300 [2024-11-17 09:07:09.128740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.300 [2024-11-17 09:07:09.128774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.300 [2024-11-17 09:07:09.128802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.132822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.132858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.132887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.136904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.136940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.136968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.140911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.140947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.140976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.145133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.145199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.145228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.150076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.150123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.150147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.154787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 
00:16:32.301 [2024-11-17 09:07:09.154821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.154849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.159502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.159600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.159659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.164179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.164344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.164362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.168995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.169031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.169060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.173679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.173743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.173758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.178283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.178325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.178339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.182925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.182961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.182990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.187714] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.187749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.187777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.192468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.192566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.192594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.197167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.197209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.197223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.201776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.201816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.201830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.206567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.206632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.206647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.211227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.211270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.211284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.216009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.216046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.216076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:16:32.301 [2024-11-17 09:07:09.220872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.301 [2024-11-17 09:07:09.220908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.301 [2024-11-17 09:07:09.220937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.560 [2024-11-17 09:07:09.225631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.560 [2024-11-17 09:07:09.225704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.560 [2024-11-17 09:07:09.225737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.230151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.230215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.230228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.235083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.235123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.235165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.239875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.239913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.239941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.244410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.244450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.244495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.248902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.248938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.248967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.253095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.253132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.253177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.257415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.257453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.257483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.261745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.261785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.261799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.266055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.266107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.266135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.270320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.270357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.270385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.274503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.274540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.274569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.278864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.278902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.278931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.283042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.283079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.283108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.287321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.287391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.287420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.292018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.292057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.292087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.296344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.296382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.296411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.300691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.300726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.300755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.305196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.305236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.305266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.309882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.309924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:32.561 [2024-11-17 09:07:09.309938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.314426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.314466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.314495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.319175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.319215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.319244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.323780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.323819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.323850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.328340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.328377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.328406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.332735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.332774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.332805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.337106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.337159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.337189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.341471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.341526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.341571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.345875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.345918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.345932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.350119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.350156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.350185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.354485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.354524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.354538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.358841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.358879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.561 [2024-11-17 09:07:09.358908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.561 [2024-11-17 09:07:09.363053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.561 [2024-11-17 09:07:09.363090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.363119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.367561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.367644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.367660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.371827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.371866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.371896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.376048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.376086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.376115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.380426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.380492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.380512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.384833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.384870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.384899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.389035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.389072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.389101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.393516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.393554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.393583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.397602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.397637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.397665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.401676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 
00:16:32.562 [2024-11-17 09:07:09.401749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.401778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.406034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.406098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.406140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.410126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.410174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.410203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.414326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.414362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.414391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.418878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.418915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.418944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.423208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.423246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.423277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.427450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.427487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.427517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.431834] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.431870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.431898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.435959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.435995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.436024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.440128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.440165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.440193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.444497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.444534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.444563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.448665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.448700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.448728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.452938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.452974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.453003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.457411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.457448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.457476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.461649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.461684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.461737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.466101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.466151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.466180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.470300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.470336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.470348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.474666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.474728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.474759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.478919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.478954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.478982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.562 [2024-11-17 09:07:09.483044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.562 [2024-11-17 09:07:09.483097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.562 [2024-11-17 09:07:09.483126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.487823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.487874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.487903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.492595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.492663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.492678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.496961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.496996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.497025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.501075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.501274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.501307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.505558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.505622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.505653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.509948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.509989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.510003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.514407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.514444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.514473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.518938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.518991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.519035] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.523469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.523508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.523552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.528238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.528275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.528305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.532872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.532912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.822 [2024-11-17 09:07:09.532956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.822 [2024-11-17 09:07:09.537411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.822 [2024-11-17 09:07:09.537448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.537477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.542138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.542174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.542203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.546654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.546722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.546738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.551449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.551709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.551744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.556127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.556164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.556192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.560258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.560293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.560323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.564471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.564507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.564536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.568737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.568771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.568800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.572887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.572922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.572951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.576904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.576940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.576969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.580967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.581003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.581031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.585046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.585082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.585110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.589164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.589200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.589229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.593398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.593435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.593463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.597532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.597569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.597598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.601604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.601638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.601666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.605637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.605673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.605726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.609998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.610053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.610082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.614065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.614116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.614145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.618218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.618254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.618283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.622289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.622325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.622353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.626384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.626421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.626449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.630514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.630549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.630577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.634622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.634682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.634696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.638655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 
00:16:32.823 [2024-11-17 09:07:09.638701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.638730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.642781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.642817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.642845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.823 [2024-11-17 09:07:09.646799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.823 [2024-11-17 09:07:09.646834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.823 [2024-11-17 09:07:09.646863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.650879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.650915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.650943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.654988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.655039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.655067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.659212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.659249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.659277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.663361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.663409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.663440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.667513] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.667565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.667592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.671499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.671534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.671562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.675551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.675586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.675625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.679515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.679551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.679579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.683545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.683581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.683641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.687659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.687695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.687722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.691647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.691682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.691710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.695721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.695755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.695782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.699724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.699758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.699786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.703691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.703725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.703753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.707661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.707695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.707722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.711570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.711651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.711665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.715512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.715547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.715576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.719445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.719479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.719507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.723448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.723483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.723511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.727561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.727624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.727654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.731829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.731864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.731892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.736064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.736100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.736129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.740266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.740303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.740331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.824 [2024-11-17 09:07:09.744744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:32.824 [2024-11-17 09:07:09.744779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.824 [2024-11-17 09:07:09.744809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.085 [2024-11-17 09:07:09.749184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.085 [2024-11-17 09:07:09.749221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.085 [2024-11-17 09:07:09.749249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.085 [2024-11-17 09:07:09.753588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.085 [2024-11-17 09:07:09.753652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.085 [2024-11-17 09:07:09.753682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.085 [2024-11-17 09:07:09.757926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.085 [2024-11-17 09:07:09.757966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.085 [2024-11-17 09:07:09.757980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.085 [2024-11-17 09:07:09.762193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.085 [2024-11-17 09:07:09.762396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.085 [2024-11-17 09:07:09.762415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.085 [2024-11-17 09:07:09.766700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.085 [2024-11-17 09:07:09.766767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.085 [2024-11-17 09:07:09.766796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.771113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.771149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.771177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.775292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.775328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.775356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.779498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.779536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:33.086 [2024-11-17 09:07:09.779579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.783660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.783695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.783723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.788260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.788297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.788326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.792435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.792472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.792501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.796649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.796684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.796712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.800777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.800811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.800840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.805001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.805037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.805065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.809526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.809565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.809594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.813791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.813833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.813847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.817919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.817959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.817973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.822322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.822361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.822374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.826919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.826956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.826984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.831724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.831760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.831788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.836453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.836495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.836509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.841103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.841158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.841189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.845734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.845774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.845788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.850346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.850385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.850414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.854806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.854842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.854870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.859142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.859179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.859191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.863381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.863416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.863444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.867569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.086 [2024-11-17 09:07:09.867634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.867664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.871827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 
00:16:33.086 [2024-11-17 09:07:09.871863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.086 [2024-11-17 09:07:09.871892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.086 [2024-11-17 09:07:09.875961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.875997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.876025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.880174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.880211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.880240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.884386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.884422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.884451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.888581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.888643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.888674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.892720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.892755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.892782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.896761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.896795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.896823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.900815] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.900850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.900878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.904853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.904890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.904918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.909115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.909152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.909180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.913284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.913320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.913350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.917383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.917420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.917449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.921455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.921491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.921519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.925508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.925543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.925571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.929648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.929683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.929736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.933651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.933685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.933742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.937940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.937980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.937993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.941911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.941951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.941982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.945917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.945955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.945984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.949810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.949847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.949876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.953693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.953767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.953796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.957677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.957750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.957778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.961651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.961685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.961753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.965688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.965751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.965765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.087 [2024-11-17 09:07:09.969874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.087 [2024-11-17 09:07:09.969911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.087 [2024-11-17 09:07:09.969941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:09.973931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:09.973969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:09.973982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:09.977998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:09.978051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:09.978095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:09.982096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:09.982146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:09.982174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:09.986567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:09.986632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:09.986648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:09.990985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:09.991035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:09.991063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:09.995374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:09.995411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:09.995424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:09.999728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:09.999762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:09.999791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:10.004301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:10.004342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:10.004371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.088 [2024-11-17 09:07:10.008874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.088 [2024-11-17 09:07:10.008913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.088 [2024-11-17 09:07:10.008942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.013379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.013417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:33.349 [2024-11-17 09:07:10.013446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.017849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.017889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.017903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.022277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.022314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.022343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.026664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.026711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.026740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.030761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.030797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.030825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.034928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.034964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.034992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.039085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.039120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.039148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.043364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.043402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.043430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.047515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.047551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.047579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.051694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.051729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.051757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.055811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.055848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.055860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.059914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.059950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.059978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.064284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.064323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.064352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.068869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.068922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.068950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.073039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.073074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.073104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.077175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.077210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.077238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.081266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.081301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.081330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.085406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.085442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.085470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.089557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.089619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.089633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.093769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.093808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.093838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.097849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.097888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.097918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.101928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 
00:16:33.349 [2024-11-17 09:07:10.101967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.101997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.106082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.349 [2024-11-17 09:07:10.106132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.349 [2024-11-17 09:07:10.106161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.349 [2024-11-17 09:07:10.110241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.110278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.110306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.114381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.114417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.114446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.118527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.118563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.118592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.122569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.122649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.122663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.126703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.126737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.126766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.130839] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.130874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.130903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.134955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.134991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.135034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.138984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.139035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.139063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.143097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.143133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.143161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.147357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.147395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.147424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.151512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.151548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.151577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.155655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.155690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.155719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.159803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.159839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.159867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.163993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.164028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.164056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.168200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.168237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.168266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.172428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.172464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.172493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.176524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.176576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.176604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.180701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.180736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.180763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.184827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.184861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.184890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.188995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.189031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.189043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.193442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.193496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.193509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.197944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.197984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.198013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.202205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.202406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.202424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.206693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.206882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.206916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.211041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.211081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.211110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.215297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.215333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.215362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.350 [2024-11-17 09:07:10.219434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.350 [2024-11-17 09:07:10.219471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.350 [2024-11-17 09:07:10.219500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.223600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.223681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.223695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.227730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.227764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.227792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.231772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.231806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.231834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.235836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.235871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.235900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.239918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.239953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.239981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.243960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.243996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:33.351 [2024-11-17 09:07:10.244024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.247973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.248008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.248037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.252053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.252088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.252116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.256184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.256220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.256249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.260245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.260281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.260310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.264396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.264432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.264461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.268502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.268553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.268581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.351 [2024-11-17 09:07:10.273061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.351 [2024-11-17 09:07:10.273127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.351 [2024-11-17 09:07:10.273155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.277417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.277453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.277481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.281841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.281882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.281896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.285957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.285996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.286026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.290124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.290159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.290188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.294297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.294332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.294361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.298503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.298539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.298567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.302635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.302698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.302713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.306782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.306819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.306847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.310870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.310906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.310935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.314947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.314983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.315026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.319065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.319101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.319129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.323474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.323511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.323539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.328074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.611 [2024-11-17 09:07:10.328110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.328153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.332238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 
00:16:33.611 [2024-11-17 09:07:10.332274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.611 [2024-11-17 09:07:10.332303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.611 [2024-11-17 09:07:10.336448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.336484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.336513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.340666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.340701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.340729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.344833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.345031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.345240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.349319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.349530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.349846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.354058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.354112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.354141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.358280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.358316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.358345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.362417] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.362453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.362481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.366652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.366719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.366749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.370784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.370819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.370849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.374916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.374951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.374980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.379053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.379088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.379116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.383289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.383325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.383353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.387841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.387877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.387906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.392396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.392450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.392468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.397089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.397129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.397175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.401744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.401785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.401799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.406411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.406453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.406467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.411035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.411072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.411101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.415519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.415557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.415585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.420167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.420397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.420431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.424759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.424797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.424825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.428967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.429005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.429034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.433303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.433345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.433359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.437575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.437642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.437672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.441990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.442045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.442074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.446360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.446399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.612 [2024-11-17 09:07:10.446412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.612 [2024-11-17 09:07:10.450644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.612 [2024-11-17 09:07:10.450690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.613 [2024-11-17 09:07:10.450717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.613 [2024-11-17 09:07:10.454843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.613 [2024-11-17 09:07:10.454878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.613 [2024-11-17 09:07:10.454907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.613 [2024-11-17 09:07:10.459177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.613 [2024-11-17 09:07:10.459219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.613 [2024-11-17 09:07:10.459233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.613 [2024-11-17 09:07:10.463678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.613 [2024-11-17 09:07:10.463716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.613 [2024-11-17 09:07:10.463745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.613 [2024-11-17 09:07:10.468276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.613 [2024-11-17 09:07:10.468315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.613 [2024-11-17 09:07:10.468361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.613 [2024-11-17 09:07:10.473016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.613 [2024-11-17 09:07:10.473075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.613 [2024-11-17 09:07:10.473089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.613 [2024-11-17 09:07:10.477742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.613 [2024-11-17 09:07:10.477785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.613 [2024-11-17 09:07:10.477799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.613 [2024-11-17 09:07:10.482379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.613 [2024-11-17 09:07:10.482417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:33.613 [2024-11-17 09:07:10.482446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.613 [2024-11-17 09:07:10.487125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e2d940) 00:16:33.613 [2024-11-17 09:07:10.487165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.613 [2024-11-17 09:07:10.487193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.613 00:16:33.613 Latency(us) 00:16:33.613 [2024-11-17T09:07:10.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.613 [2024-11-17T09:07:10.543Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:33.613 nvme0n1 : 2.00 7164.56 895.57 0.00 0.00 2230.28 1742.66 8996.31 00:16:33.613 [2024-11-17T09:07:10.543Z] =================================================================================================================== 00:16:33.613 [2024-11-17T09:07:10.543Z] Total : 7164.56 895.57 0.00 0.00 2230.28 1742.66 8996.31 00:16:33.613 0 00:16:33.613 09:07:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:33.613 09:07:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:33.613 09:07:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:33.613 09:07:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:33.613 | .driver_specific 00:16:33.613 | .nvme_error 00:16:33.613 | .status_code 00:16:33.613 | .command_transient_transport_error' 00:16:33.872 09:07:10 -- host/digest.sh@71 -- # (( 462 > 0 )) 00:16:33.872 09:07:10 -- host/digest.sh@73 -- # killprocess 72068 00:16:33.872 09:07:10 -- common/autotest_common.sh@936 -- # '[' -z 72068 ']' 00:16:33.872 09:07:10 -- common/autotest_common.sh@940 -- # kill -0 72068 00:16:33.872 09:07:10 -- common/autotest_common.sh@941 -- # uname 00:16:33.872 09:07:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:33.872 09:07:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72068 00:16:34.131 killing process with pid 72068 00:16:34.131 Received shutdown signal, test time was about 2.000000 seconds 00:16:34.131 00:16:34.131 Latency(us) 00:16:34.131 [2024-11-17T09:07:11.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.131 [2024-11-17T09:07:11.061Z] =================================================================================================================== 00:16:34.131 [2024-11-17T09:07:11.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.131 09:07:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:34.131 09:07:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:34.131 09:07:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72068' 00:16:34.131 09:07:10 -- common/autotest_common.sh@955 -- # kill 72068 00:16:34.131 09:07:10 -- common/autotest_common.sh@960 -- # wait 72068 00:16:34.131 09:07:11 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:16:34.131 09:07:11 -- host/digest.sh@54 -- # local rw bs qd 00:16:34.131 09:07:11 -- host/digest.sh@56 -- # rw=randwrite 00:16:34.131 09:07:11 -- host/digest.sh@56 -- # bs=4096 00:16:34.131 09:07:11 -- 
host/digest.sh@56 -- # qd=128 00:16:34.131 09:07:11 -- host/digest.sh@58 -- # bperfpid=72128 00:16:34.131 09:07:11 -- host/digest.sh@60 -- # waitforlisten 72128 /var/tmp/bperf.sock 00:16:34.131 09:07:11 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:34.131 09:07:11 -- common/autotest_common.sh@829 -- # '[' -z 72128 ']' 00:16:34.131 09:07:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:34.131 09:07:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.131 09:07:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:34.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:34.131 09:07:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.131 09:07:11 -- common/autotest_common.sh@10 -- # set +x 00:16:34.390 [2024-11-17 09:07:11.064079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:34.390 [2024-11-17 09:07:11.064393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72128 ] 00:16:34.390 [2024-11-17 09:07:11.199092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.390 [2024-11-17 09:07:11.254919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.326 09:07:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.326 09:07:12 -- common/autotest_common.sh@862 -- # return 0 00:16:35.326 09:07:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:35.326 09:07:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:35.585 09:07:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:35.585 09:07:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.585 09:07:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.585 09:07:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.585 09:07:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:35.585 09:07:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:35.843 nvme0n1 00:16:35.843 09:07:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:35.843 09:07:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.843 09:07:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.843 09:07:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.843 09:07:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:35.844 09:07:12 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:36.103 Running I/O for 2 seconds... 
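The trace above is the setup for the randwrite digest-error pass: digest.sh starts bdevperf on /var/tmp/bperf.sock with -w randwrite -o 4096 -q 128, turns retries off with bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1, attaches the controller with data digest enabled (--ddgst) while crc32c injection is disabled, then re-arms accel_error_inject_error to corrupt every 256th crc32c operation before perform_tests runs, so each corrupted PDU surfaces below as a COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch of the same RPC sequence, assuming it is run by hand from an SPDK checkout once the bperf socket is listening (every flag, address and filter below is copied from the trace; only the standalone ordering is an assumption):
# start the I/O target and wait for its RPC socket (the harness uses waitforlisten for this)
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
# disable driver retries and keep per-error statistics
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# attach over TCP with data digest enabled while crc32c error injection is off
scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt every 256th crc32c so data digest checks fail during the run
scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# read the transient-error counter back, as get_transient_errcount does after each pass
scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'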
00:16:36.103 [2024-11-17 09:07:12.807234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ddc00 00:16:36.103 [2024-11-17 09:07:12.808692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.808747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.823681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fef90 00:16:36.103 [2024-11-17 09:07:12.825004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.825041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.838713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ff3c8 00:16:36.103 [2024-11-17 09:07:12.840036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.840071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.854205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190feb58 00:16:36.103 [2024-11-17 09:07:12.855482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.855517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.868983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fe720 00:16:36.103 [2024-11-17 09:07:12.870386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.870424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.884198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fe2e8 00:16:36.103 [2024-11-17 09:07:12.885651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.885879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.900973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fdeb0 00:16:36.103 [2024-11-17 09:07:12.902443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.902527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:16:36.103 [2024-11-17 09:07:12.918344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fda78 00:16:36.103 [2024-11-17 09:07:12.919858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.919904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.934614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fd640 00:16:36.103 [2024-11-17 09:07:12.935892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.935927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.949193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fd208 00:16:36.103 [2024-11-17 09:07:12.950468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.950674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.963729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fcdd0 00:16:36.103 [2024-11-17 09:07:12.965157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.965196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.978463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fc998 00:16:36.103 [2024-11-17 09:07:12.979749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.103 [2024-11-17 09:07:12.979782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:36.103 [2024-11-17 09:07:12.992677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fc560 00:16:36.103 [2024-11-17 09:07:12.993898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.104 [2024-11-17 09:07:12.993933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:36.104 [2024-11-17 09:07:13.006939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fc128 00:16:36.104 [2024-11-17 09:07:13.008166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.104 [2024-11-17 09:07:13.008199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:16:36.104 [2024-11-17 09:07:13.021231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fbcf0 00:16:36.104 [2024-11-17 09:07:13.022445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.104 [2024-11-17 09:07:13.022479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.036696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fb8b8 00:16:36.363 [2024-11-17 09:07:13.037893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.038097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.051204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fb480 00:16:36.363 [2024-11-17 09:07:13.052362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.052537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.065526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fb048 00:16:36.363 [2024-11-17 09:07:13.066772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.066807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.079944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fac10 00:16:36.363 [2024-11-17 09:07:13.081106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.081139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.094145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fa7d8 00:16:36.363 [2024-11-17 09:07:13.095326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.095359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.108471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190fa3a0 00:16:36.363 [2024-11-17 09:07:13.109634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.109693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.123115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f9f68 00:16:36.363 [2024-11-17 09:07:13.124231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.124406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.137558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f9b30 00:16:36.363 [2024-11-17 09:07:13.138745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.138779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.152057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f96f8 00:16:36.363 [2024-11-17 09:07:13.153572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.153612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.168897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f92c0 00:16:36.363 [2024-11-17 09:07:13.170214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.170248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.184059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f8e88 00:16:36.363 [2024-11-17 09:07:13.185151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.185184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.198535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f8a50 00:16:36.363 [2024-11-17 09:07:13.199653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.199709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.212935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f8618 00:16:36.363 [2024-11-17 09:07:13.214097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.214145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.227542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f81e0 00:16:36.363 [2024-11-17 09:07:13.228611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.228845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.242246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f7da8 00:16:36.363 [2024-11-17 09:07:13.243499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.243723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.256898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f7970 00:16:36.363 [2024-11-17 09:07:13.258196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.258391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.271376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f7538 00:16:36.363 [2024-11-17 09:07:13.272568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.272830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:36.363 [2024-11-17 09:07:13.286046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f7100 00:16:36.363 [2024-11-17 09:07:13.287355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.363 [2024-11-17 09:07:13.287575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.301446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f6cc8 00:16:36.623 [2024-11-17 09:07:13.302703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.302911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.316446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f6890 00:16:36.623 [2024-11-17 09:07:13.317750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.317944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.331188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f6458 00:16:36.623 [2024-11-17 09:07:13.332350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.332549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.345882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f6020 00:16:36.623 [2024-11-17 09:07:13.347053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.347277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.360446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f5be8 00:16:36.623 [2024-11-17 09:07:13.361603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.361655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.375229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f57b0 00:16:36.623 [2024-11-17 09:07:13.376186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.376348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.389757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f5378 00:16:36.623 [2024-11-17 09:07:13.390717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.390759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.404035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f4f40 00:16:36.623 [2024-11-17 09:07:13.404999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.405063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.420192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f4b08 00:16:36.623 [2024-11-17 09:07:13.421258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.421293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.436422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f46d0 00:16:36.623 [2024-11-17 09:07:13.437553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.437586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.451109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f4298 00:16:36.623 [2024-11-17 09:07:13.452056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.452104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.465484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f3e60 00:16:36.623 [2024-11-17 09:07:13.466501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.466733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.480082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f3a28 00:16:36.623 [2024-11-17 09:07:13.481018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.481081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.494439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f35f0 00:16:36.623 [2024-11-17 09:07:13.495315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.495365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.508745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f31b8 00:16:36.623 [2024-11-17 09:07:13.509670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.509929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.523209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f2d80 00:16:36.623 [2024-11-17 09:07:13.524105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.524155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:36.623 [2024-11-17 09:07:13.537454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f2948 00:16:36.623 [2024-11-17 09:07:13.538397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.623 [2024-11-17 09:07:13.538431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:36.882 [2024-11-17 09:07:13.552881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f2510 00:16:36.882 [2024-11-17 09:07:13.553851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.882 [2024-11-17 09:07:13.554008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:36.882 [2024-11-17 09:07:13.567642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f20d8 00:16:36.882 [2024-11-17 09:07:13.568434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.882 [2024-11-17 09:07:13.568471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:36.882 [2024-11-17 09:07:13.581999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f1ca0 00:16:36.882 [2024-11-17 09:07:13.583063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.583096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.597193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f1868 00:16:36.883 [2024-11-17 09:07:13.598038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.598237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.613636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f1430 00:16:36.883 [2024-11-17 09:07:13.614560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.614618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.629483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f0ff8 00:16:36.883 [2024-11-17 09:07:13.630397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 
09:07:13.630432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.644933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f0bc0 00:16:36.883 [2024-11-17 09:07:13.645692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.645911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.660037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f0788 00:16:36.883 [2024-11-17 09:07:13.660925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.660962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.676885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190f0350 00:16:36.883 [2024-11-17 09:07:13.677949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.677982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.694980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190eff18 00:16:36.883 [2024-11-17 09:07:13.695987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.696033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.711917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190efae0 00:16:36.883 [2024-11-17 09:07:13.712701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.712766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.728580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ef6a8 00:16:36.883 [2024-11-17 09:07:13.729345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.729383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.744917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ef270 00:16:36.883 [2024-11-17 09:07:13.745638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4389 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:36.883 [2024-11-17 09:07:13.745844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.760267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190eee38 00:16:36.883 [2024-11-17 09:07:13.760983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.761153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.774715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190eea00 00:16:36.883 [2024-11-17 09:07:13.775411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.775562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.789125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ee5c8 00:16:36.883 [2024-11-17 09:07:13.789851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.790021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:36.883 [2024-11-17 09:07:13.803684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ee190 00:16:36.883 [2024-11-17 09:07:13.804340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:36.883 [2024-11-17 09:07:13.804377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.819768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190edd58 00:16:37.142 [2024-11-17 09:07:13.820414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.820442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.836787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ed920 00:16:37.142 [2024-11-17 09:07:13.837452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.837492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.852017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ed4e8 00:16:37.142 [2024-11-17 09:07:13.852668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23371 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.852864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.867103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ed0b0 00:16:37.142 [2024-11-17 09:07:13.867783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.867823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.882879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ecc78 00:16:37.142 [2024-11-17 09:07:13.883528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.883567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.897899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ec840 00:16:37.142 [2024-11-17 09:07:13.898761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.898793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.914330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ec408 00:16:37.142 [2024-11-17 09:07:13.915028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.915068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.931130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ebfd0 00:16:37.142 [2024-11-17 09:07:13.931821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.931897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.948142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ebb98 00:16:37.142 [2024-11-17 09:07:13.948746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.948785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.964252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190eb760 00:16:37.142 [2024-11-17 09:07:13.964865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4725 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.964907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.979938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190eb328 00:16:37.142 [2024-11-17 09:07:13.980520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.980557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:37.142 [2024-11-17 09:07:13.994798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190eaef0 00:16:37.142 [2024-11-17 09:07:13.995422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.142 [2024-11-17 09:07:13.995477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:37.143 [2024-11-17 09:07:14.010120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190eaab8 00:16:37.143 [2024-11-17 09:07:14.010665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.143 [2024-11-17 09:07:14.010715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:37.143 [2024-11-17 09:07:14.025057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ea680 00:16:37.143 [2024-11-17 09:07:14.025577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.143 [2024-11-17 09:07:14.025631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:37.143 [2024-11-17 09:07:14.041295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190ea248 00:16:37.143 [2024-11-17 09:07:14.042093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.143 [2024-11-17 09:07:14.042154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:37.143 [2024-11-17 09:07:14.057066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e9e10 00:16:37.143 [2024-11-17 09:07:14.057573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.143 [2024-11-17 09:07:14.057635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:37.401 [2024-11-17 09:07:14.072814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e99d8 00:16:37.401 [2024-11-17 09:07:14.073412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:16623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.401 [2024-11-17 09:07:14.073448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:37.401 [2024-11-17 09:07:14.087932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e95a0 00:16:37.402 [2024-11-17 09:07:14.088437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.088470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.103001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e9168 00:16:37.402 [2024-11-17 09:07:14.103488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.103532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.117503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e8d30 00:16:37.402 [2024-11-17 09:07:14.118071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.118102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.131884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e88f8 00:16:37.402 [2024-11-17 09:07:14.132299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.132339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.146246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e84c0 00:16:37.402 [2024-11-17 09:07:14.146893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.146923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.160751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e8088 00:16:37.402 [2024-11-17 09:07:14.161352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.161381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.175349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e7c50 00:16:37.402 [2024-11-17 09:07:14.175935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.175980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.189762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e7818 00:16:37.402 [2024-11-17 09:07:14.190375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.190405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.204278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e73e0 00:16:37.402 [2024-11-17 09:07:14.204660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.204687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.218502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e6fa8 00:16:37.402 [2024-11-17 09:07:14.218949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.218996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.232871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e6b70 00:16:37.402 [2024-11-17 09:07:14.233424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.233455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.247426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e6738 00:16:37.402 [2024-11-17 09:07:14.247843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.247873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.263075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e6300 00:16:37.402 [2024-11-17 09:07:14.263438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.263463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.277495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e5ec8 00:16:37.402 [2024-11-17 09:07:14.277922] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.277955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.293688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e5a90 00:16:37.402 [2024-11-17 09:07:14.294092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.294144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.308566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e5658 00:16:37.402 [2024-11-17 09:07:14.309077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.309124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:37.402 [2024-11-17 09:07:14.323359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e5220 00:16:37.402 [2024-11-17 09:07:14.323698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.402 [2024-11-17 09:07:14.323720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.338785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e4de8 00:16:37.661 [2024-11-17 09:07:14.339102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.339128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.353148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e49b0 00:16:37.661 [2024-11-17 09:07:14.353435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.353460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.367417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e4578 00:16:37.661 [2024-11-17 09:07:14.367741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.367768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.381510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e4140 00:16:37.661 [2024-11-17 09:07:14.381881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.381914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.395993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e3d08 00:16:37.661 [2024-11-17 09:07:14.396283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.396308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.410327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e38d0 00:16:37.661 [2024-11-17 09:07:14.410793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.410817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.424730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e3498 00:16:37.661 [2024-11-17 09:07:14.424974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.424998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.438856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e3060 00:16:37.661 [2024-11-17 09:07:14.439091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.439115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.453086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e2c28 00:16:37.661 [2024-11-17 09:07:14.453314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.453334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.467297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e27f0 00:16:37.661 [2024-11-17 09:07:14.467511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.467531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.481464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e23b8 00:16:37.661 [2024-11-17 
09:07:14.481729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.481783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.497548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e1f80 00:16:37.661 [2024-11-17 09:07:14.497817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.497843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.514580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e1b48 00:16:37.661 [2024-11-17 09:07:14.515009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.515039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.531722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e1710 00:16:37.661 [2024-11-17 09:07:14.531911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.531932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.548244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e12d8 00:16:37.661 [2024-11-17 09:07:14.548428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.548449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:37.661 [2024-11-17 09:07:14.565060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e0ea0 00:16:37.661 [2024-11-17 09:07:14.565254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.661 [2024-11-17 09:07:14.565276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:37.662 [2024-11-17 09:07:14.580034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e0a68 00:16:37.662 [2024-11-17 09:07:14.580213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.662 [2024-11-17 09:07:14.580234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.595378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e0630 00:16:37.921 
[2024-11-17 09:07:14.595514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.595534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.609704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190e01f8 00:16:37.921 [2024-11-17 09:07:14.610042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.610080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.625542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190dfdc0 00:16:37.921 [2024-11-17 09:07:14.625742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.625765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.641754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190df988 00:16:37.921 [2024-11-17 09:07:14.641878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.641901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.656994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190df550 00:16:37.921 [2024-11-17 09:07:14.657108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.657129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.671181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190df118 00:16:37.921 [2024-11-17 09:07:14.671286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.671306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.685243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190dece0 00:16:37.921 [2024-11-17 09:07:14.685341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.685361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.699514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190de8a8 
00:16:37.921 [2024-11-17 09:07:14.699615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.699652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.713537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190de038 00:16:37.921 [2024-11-17 09:07:14.713654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.713675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.733500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190de038 00:16:37.921 [2024-11-17 09:07:14.734908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.734942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.748727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190de470 00:16:37.921 [2024-11-17 09:07:14.750161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.750344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.764207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190de8a8 00:16:37.921 [2024-11-17 09:07:14.765746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.765797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:37.921 [2024-11-17 09:07:14.780373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcdc0) with pdu=0x2000190dece0 00:16:37.921 [2024-11-17 09:07:14.781676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.921 [2024-11-17 09:07:14.781778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:37.921 00:16:37.921 Latency(us) 00:16:37.921 [2024-11-17T09:07:14.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.921 [2024-11-17T09:07:14.851Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.921 nvme0n1 : 2.00 16752.55 65.44 0.00 0.00 7634.30 6791.91 23116.33 00:16:37.921 [2024-11-17T09:07:14.851Z] =================================================================================================================== 00:16:37.921 [2024-11-17T09:07:14.851Z] Total : 16752.55 65.44 0.00 0.00 7634.30 6791.91 23116.33 00:16:37.921 0 00:16:37.921 09:07:14 -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:37.921 09:07:14 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:37.921 09:07:14 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:37.921 | .driver_specific 00:16:37.921 | .nvme_error 00:16:37.921 | .status_code 00:16:37.921 | .command_transient_transport_error' 00:16:37.921 09:07:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:38.180 09:07:15 -- host/digest.sh@71 -- # (( 131 > 0 )) 00:16:38.180 09:07:15 -- host/digest.sh@73 -- # killprocess 72128 00:16:38.180 09:07:15 -- common/autotest_common.sh@936 -- # '[' -z 72128 ']' 00:16:38.180 09:07:15 -- common/autotest_common.sh@940 -- # kill -0 72128 00:16:38.180 09:07:15 -- common/autotest_common.sh@941 -- # uname 00:16:38.180 09:07:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.180 09:07:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72128 00:16:38.439 killing process with pid 72128 00:16:38.439 Received shutdown signal, test time was about 2.000000 seconds 00:16:38.439 00:16:38.439 Latency(us) 00:16:38.439 [2024-11-17T09:07:15.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.439 [2024-11-17T09:07:15.369Z] =================================================================================================================== 00:16:38.439 [2024-11-17T09:07:15.369Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:38.439 09:07:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:38.439 09:07:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:38.439 09:07:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72128' 00:16:38.439 09:07:15 -- common/autotest_common.sh@955 -- # kill 72128 00:16:38.439 09:07:15 -- common/autotest_common.sh@960 -- # wait 72128 00:16:38.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:38.439 09:07:15 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:16:38.439 09:07:15 -- host/digest.sh@54 -- # local rw bs qd 00:16:38.439 09:07:15 -- host/digest.sh@56 -- # rw=randwrite 00:16:38.439 09:07:15 -- host/digest.sh@56 -- # bs=131072 00:16:38.439 09:07:15 -- host/digest.sh@56 -- # qd=16 00:16:38.439 09:07:15 -- host/digest.sh@58 -- # bperfpid=72188 00:16:38.439 09:07:15 -- host/digest.sh@60 -- # waitforlisten 72188 /var/tmp/bperf.sock 00:16:38.439 09:07:15 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:38.439 09:07:15 -- common/autotest_common.sh@829 -- # '[' -z 72188 ']' 00:16:38.439 09:07:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:38.439 09:07:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.439 09:07:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:38.439 09:07:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.440 09:07:15 -- common/autotest_common.sh@10 -- # set +x 00:16:38.440 [2024-11-17 09:07:15.358414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
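The get_transient_errcount check above reduces to a single RPC query against the bdevperf instance; a minimal sketch, reusing the rpc.py path, the /var/tmp/bperf.sock socket, and the jq filter that appear in the trace (only the errcount variable name is mine):

  # Pull per-bdev I/O statistics from the bdevperf RPC server and extract how many
  # completions carried the "command transient transport error" status.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the test only asserts that at least one injected digest error was counted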
00:16:38.440 [2024-11-17 09:07:15.358759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72188 ] 00:16:38.440 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:38.440 Zero copy mechanism will not be used. 00:16:38.698 [2024-11-17 09:07:15.498624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.698 [2024-11-17 09:07:15.552624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.634 09:07:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.634 09:07:16 -- common/autotest_common.sh@862 -- # return 0 00:16:39.634 09:07:16 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:39.634 09:07:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:39.634 09:07:16 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:39.634 09:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.634 09:07:16 -- common/autotest_common.sh@10 -- # set +x 00:16:39.634 09:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.634 09:07:16 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:39.634 09:07:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:39.892 nvme0n1 00:16:40.150 09:07:16 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:40.150 09:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.150 09:07:16 -- common/autotest_common.sh@10 -- # set +x 00:16:40.150 09:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.150 09:07:16 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:40.151 09:07:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:40.151 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:40.151 Zero copy mechanism will not be used. 00:16:40.151 Running I/O for 2 seconds... 
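The setup traced above can be read as the following rough bash sketch (the commands, paths, the 10.0.0.2:4420 target, and the nqn are all taken from the trace; the assumption that rpc_cmd talks to the SPDK target app's default RPC socket while bperf_rpc uses /var/tmp/bperf.sock is mine, and this is not a verbatim excerpt of host/digest.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_rpc() { "$rpc" -s /var/tmp/bperf.sock "$@"; }   # RPCs to the bdevperf instance
  rpc_cmd()   { "$rpc" "$@"; }                          # default socket (assumed: the nvmf target app)

  # Start bdevperf on its own RPC socket in wait-for-tests mode (-z): randwrite,
  # 128 KiB I/Os, queue depth 16, 2 second run.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &

  # Collect NVMe error statistics, retry failed I/O indefinitely, and attach the
  # target over TCP with the data digest (--ddgst) enabled.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 32 crc32c operations so data digest verification fails, then kick off
  # the queued bdevperf job; the digest errors logged below are those injections.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests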
00:16:40.151 [2024-11-17 09:07:16.967143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:16.967513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:16.967544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:16.972813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:16.973130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:16.973163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:16.978507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:16.979010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:16.979059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:16.984504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:16.984815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:16.984843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:16.990094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:16.990593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:16.990638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:16.995637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:16.995935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:16.995996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.001281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.001753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.001789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.006698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.007068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.007102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.011802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.012127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.012160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.016665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.016948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.016975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.021274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.021818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.021853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.026295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.026596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.026663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.031075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.031355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.031382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.035907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.036210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.036237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.040613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.040901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.040928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.045271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.045771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.045806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.050346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.050662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.050699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.055298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.055591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.055628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.060273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.060549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.060576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.065018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.065300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.065328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.069643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.069982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.070011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.151 [2024-11-17 09:07:17.074492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.151 [2024-11-17 09:07:17.074848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.151 [2024-11-17 09:07:17.074882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.079687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.080008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.080038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.084738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.085031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.085059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.089437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.089964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.089999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.094448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.094799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.094833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.099410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.099735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.099764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.104268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.104571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 
09:07:17.104608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.109086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.109381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.109410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.114077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.114422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.114451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.118878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.119177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.119204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.123424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.123748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.123781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.128008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.128280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.128307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.132595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.132922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.411 [2024-11-17 09:07:17.132955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.411 [2024-11-17 09:07:17.137149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.411 [2024-11-17 09:07:17.137424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.137452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.141687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.142007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.142063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.146411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.146742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.146771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.151077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.151367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.151395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.155791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.156067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.156093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.160353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.160649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.160676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.165050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.165344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.165371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.169788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.170123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.170149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.174493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.174830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.174862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.179143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.179416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.179443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.183745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.184018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.184044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.188265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.188541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.188567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.192752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.193025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.193051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.197277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.197785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.197820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.202226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.202500] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.202526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.206909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.207223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.207251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.211724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.212013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.212041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.216564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.216931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.216965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.221443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.221963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.221988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.226546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.226926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.226991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.231718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.231994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.232023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.236690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.236981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.237010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.241597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.242100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.242164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.246899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.247227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.247255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.252014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.252295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.252322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.256901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.257249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.257277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.262237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.262581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.412 [2024-11-17 09:07:17.262617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.412 [2024-11-17 09:07:17.266841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.412 [2024-11-17 09:07:17.267132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.267159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.271404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 
[2024-11-17 09:07:17.271710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.271737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.276002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.276277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.276304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.280615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.280898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.280925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.285087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.285363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.285389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.289613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.289945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.289973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.294374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.294679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.294716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.298940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.299227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.299253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.303563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) 
with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.303875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.303901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.308398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.308903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.308936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.313676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.314105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.314149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.318888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.319223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.319254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.324289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.324830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.324866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.329753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.330158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.330186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.413 [2024-11-17 09:07:17.335069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.413 [2024-11-17 09:07:17.335430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.413 [2024-11-17 09:07:17.335459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.340371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.340883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.340917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.345580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.345925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.345955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.350456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.350800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.350835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.355273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.355554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.355582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.359931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.360229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.360259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.364807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.365089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.365117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.369400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.369751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.369781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.374316] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.374634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.374672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.378967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.379266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.379293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.383629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.383910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.383937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.388350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.388820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.388854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.393249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.393532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.393559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.398178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.398461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.398488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.403084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.403376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.403405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:16:40.674 [2024-11-17 09:07:17.408004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.408287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.408314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.674 [2024-11-17 09:07:17.412810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.674 [2024-11-17 09:07:17.413098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.674 [2024-11-17 09:07:17.413144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.417522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.417896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.417927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.422261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.422587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.422646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.427260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.427547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.427575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.432448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.432937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.432973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.437543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.437953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.437989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.442712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.443043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.443072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.447707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.448036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.448064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.452542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.453060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.453094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.457733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.458075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.458118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.462691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.463039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.463087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.467666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.467969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.467999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.472579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.473064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.473106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.477575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.477967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.478008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.482666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.483012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.483046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.487386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.487698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.487725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.492006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.492287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.492315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.496790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.497091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.497117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.501430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.501771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.501810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.506204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.506484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.506511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.511075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.511357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.511384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.515904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.516223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.516252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.521378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.521785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.521816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.526158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.526436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.526463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.530809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.531111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.531140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.535579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.535934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.535975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.540281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.540563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 
[2024-11-17 09:07:17.540603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.675 [2024-11-17 09:07:17.544860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.675 [2024-11-17 09:07:17.545158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.675 [2024-11-17 09:07:17.545185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.549790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.550152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.550178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.554427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.554755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.554787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.559104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.559430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.559457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.563928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.564210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.564237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.568625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.568944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.568983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.573218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.573670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.573702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.578281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.578585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.578620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.583094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.583377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.583404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.587786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.588139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.588190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.592595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.592969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.593009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.676 [2024-11-17 09:07:17.597773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.676 [2024-11-17 09:07:17.598159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.676 [2024-11-17 09:07:17.598186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.602860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.603138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.603164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.607710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.607985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.608011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.612282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.612556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.612582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.616872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.617145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.617171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.621448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.621821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.621856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.626171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.626451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.626478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.630907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.631197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.631224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.635579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.635949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.635982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.640110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.640383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.640410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.644747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.645025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.645052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.649221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.649495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.649521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.653933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.654251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.654278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.658568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.658933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.658998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.663413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.663707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.663734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.668030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.668301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.668328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.672586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 
[2024-11-17 09:07:17.672870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.672896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.936 [2024-11-17 09:07:17.677261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.936 [2024-11-17 09:07:17.677592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.936 [2024-11-17 09:07:17.677630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.682223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.682503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.682530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.687306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.687589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.687657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.692316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.692823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.692873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.697890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.698246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.698272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.702944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.703268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.703294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.708048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) 
with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.708326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.708353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.712949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.713287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.713314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.718170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.718445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.718472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.723136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.723428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.723450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.728062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.728334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.728360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.732901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.733191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.733218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.737500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.737894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.737928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.742696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.742991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.743019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.748209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.748681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.748727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.753630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.753996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.754097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.758682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.759208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.759390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.763952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.764427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.764604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.769212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.769769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.769956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.774821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.775373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.775587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.780693] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.781174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.781355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.786083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.786552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.786770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.791454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.791981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.792190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.796745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.797235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.797380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.801873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.802383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.802549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.807137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.807616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.807901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.937 [2024-11-17 09:07:17.812366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.812840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.813005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:16:40.937 [2024-11-17 09:07:17.817477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.937 [2024-11-17 09:07:17.818042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.937 [2024-11-17 09:07:17.818247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.938 [2024-11-17 09:07:17.822749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.938 [2024-11-17 09:07:17.823214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.938 [2024-11-17 09:07:17.823415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.938 [2024-11-17 09:07:17.828013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.938 [2024-11-17 09:07:17.828475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.938 [2024-11-17 09:07:17.828690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.938 [2024-11-17 09:07:17.833386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.938 [2024-11-17 09:07:17.833937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.938 [2024-11-17 09:07:17.834194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.938 [2024-11-17 09:07:17.838951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.938 [2024-11-17 09:07:17.839428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.938 [2024-11-17 09:07:17.839603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.938 [2024-11-17 09:07:17.844231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.938 [2024-11-17 09:07:17.844718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.938 [2024-11-17 09:07:17.844953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.938 [2024-11-17 09:07:17.849633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.938 [2024-11-17 09:07:17.850166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.938 [2024-11-17 09:07:17.850348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.938 [2024-11-17 09:07:17.855038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.938 [2024-11-17 09:07:17.855491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.938 [2024-11-17 09:07:17.855683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.938 [2024-11-17 09:07:17.860467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:40.938 [2024-11-17 09:07:17.861006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.938 [2024-11-17 09:07:17.861041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.865538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.197 [2024-11-17 09:07:17.865951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.197 [2024-11-17 09:07:17.865987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.870879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.197 [2024-11-17 09:07:17.871169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.197 [2024-11-17 09:07:17.871196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.875673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.197 [2024-11-17 09:07:17.875963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.197 [2024-11-17 09:07:17.875990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.880376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.197 [2024-11-17 09:07:17.880687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.197 [2024-11-17 09:07:17.880715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.885135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.197 [2024-11-17 09:07:17.885425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.197 [2024-11-17 09:07:17.885453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.889816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.197 [2024-11-17 09:07:17.890170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.197 [2024-11-17 09:07:17.890197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.894596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.197 [2024-11-17 09:07:17.894928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.197 [2024-11-17 09:07:17.894956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.899274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.197 [2024-11-17 09:07:17.899556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.197 [2024-11-17 09:07:17.899583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.197 [2024-11-17 09:07:17.904092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.904374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.904400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.908789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.909071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.909113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.913648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.914160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.914224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.918621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.919004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.919060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.923464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.923782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.923820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.928290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.928570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.928606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.933053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.933357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.933384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.937828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.938161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.938187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.942554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.942958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.943014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.947527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.947892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.947979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.952371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.952664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 
[2024-11-17 09:07:17.952693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.957187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.957502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.957529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.961897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.962205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.962232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.966618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.966996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.967097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.971546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.971843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.971865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.976277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.976555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.976582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.982092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.982413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.982443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.987364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.987731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.987771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.992937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.993272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.993302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:17.998512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:17.999044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:17.999110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:18.004132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:18.004453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:18.004510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:18.009250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:18.009621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:18.009676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:18.014542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:18.015086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:18.015145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:18.019823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:18.020140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:18.020201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:18.024536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:18.024887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:18.024919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:18.029390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:18.029755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:18.029796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:18.034666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.198 [2024-11-17 09:07:18.035170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.198 [2024-11-17 09:07:18.035206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.198 [2024-11-17 09:07:18.040088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.040402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.040457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.044898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.045224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.045320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.049821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.050225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.050276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.054780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.055117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.055156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.059513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.059868] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.059901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.064407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.064701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.064778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.069076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.069358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.069418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.073750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.074112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.074165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.078448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.078955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.079003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.083403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.083770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.083808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.088241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.088525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.088552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.092973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.093258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.093335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.097783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.098159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.098202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.102500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.103008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.103087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.107533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.107916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.107955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.112279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.112595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.112667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.117007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.117287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.117315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.199 [2024-11-17 09:07:18.122143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.199 [2024-11-17 09:07:18.122441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.199 [2024-11-17 09:07:18.122468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.127148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 
09:07:18.127440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.127468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.132156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.132456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.132482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.136954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.137236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.137263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.141614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.141969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.141999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.146366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.146905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.146939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.151356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.151670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.151696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.156221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.156501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.156529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.160919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with 
pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.161201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.161228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.165532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.165895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.165926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.170334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.170823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.170872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.175255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.175556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.175583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.180081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.180363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.180390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.184763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.185039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.185065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.189312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.189600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.189639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.193953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.194263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.459 [2024-11-17 09:07:18.194288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.459 [2024-11-17 09:07:18.198547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.459 [2024-11-17 09:07:18.199078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.199139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.203552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.203877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.203904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.208089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.208365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.208391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.212664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.212936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.212968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.217447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.217828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.217863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.222447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.223001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.223049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.227513] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.227859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.227892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.232409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.232747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.232775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.237832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.238198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.238226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.242930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.243237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.243265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.247858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.248145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.248171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.252661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.252949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.252976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.257308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.257592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.257627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:16:41.460 [2024-11-17 09:07:18.262093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.262387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.262414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.266867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.267166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.267193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.271516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.271851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.271884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.276276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.276574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.276610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.280893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.281191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.281218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.285484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.285869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.285904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.290502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.291031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.291079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.296072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.296397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.296424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.300750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.301034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.301060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.305341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.305651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.305678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.310124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.310405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.310432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.314822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.315122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.315149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.319497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.319829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.319862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.324164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.324441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.460 [2024-11-17 09:07:18.324468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.460 [2024-11-17 09:07:18.328780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.460 [2024-11-17 09:07:18.329060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.329103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.333379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.333757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.333782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.338572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.339011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.339048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.344216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.344567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.344588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.349761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.350078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.350130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.355165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.355465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.355510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.360199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.360496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.360530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.365187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.365520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.365560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.370148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.370486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.370523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.374976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.375327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.375364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.379634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.461 [2024-11-17 09:07:18.379981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.461 [2024-11-17 09:07:18.380018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.461 [2024-11-17 09:07:18.384734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.721 [2024-11-17 09:07:18.385136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.721 [2024-11-17 09:07:18.385191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.721 [2024-11-17 09:07:18.389370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.721 [2024-11-17 09:07:18.389772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.721 [2024-11-17 09:07:18.389813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.721 [2024-11-17 09:07:18.394391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.721 [2024-11-17 09:07:18.394717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.721 
[2024-11-17 09:07:18.394791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.721
[2024-11-17 09:07:18.399048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:41.721
[2024-11-17 09:07:18.399388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.721
[2024-11-17 09:07:18.399426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.721
[... the same three-record sequence - a data_crc32_calc_done data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90, the affected WRITE (sqid:1 cid:15 nsid:1, len:32, varying lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion - repeats approximately every 5 ms through 09:07:18.942 ...]
[2024-11-17 09:07:18.942369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:42.243
[2024-11-17 09:07:18.942756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.243 [2024-11-17 09:07:18.942816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:42.243 [2024-11-17 09:07:18.947296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:42.243 [2024-11-17 09:07:18.947648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.243 [2024-11-17 09:07:18.947675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:42.243 [2024-11-17 09:07:18.952169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:42.243 [2024-11-17 09:07:18.952516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.243 [2024-11-17 09:07:18.952556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.243 [2024-11-17 09:07:18.957098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdfcf60) with pdu=0x2000190fef90 00:16:42.243 [2024-11-17 09:07:18.957430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.243 [2024-11-17 09:07:18.957464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.243 00:16:42.243 Latency(us) 00:16:42.243 [2024-11-17T09:07:19.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.243 [2024-11-17T09:07:19.173Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:42.243 nvme0n1 : 2.00 6277.81 784.73 0.00 0.00 2543.29 2010.76 7745.16 00:16:42.243 [2024-11-17T09:07:19.174Z] =================================================================================================================== 00:16:42.244 [2024-11-17T09:07:19.174Z] Total : 6277.81 784.73 0.00 0.00 2543.29 2010.76 7745.16 00:16:42.244 0 00:16:42.244 09:07:18 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:42.244 09:07:18 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:42.244 09:07:18 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:42.244 | .driver_specific 00:16:42.244 | .nvme_error 00:16:42.244 | .status_code 00:16:42.244 | .command_transient_transport_error' 00:16:42.244 09:07:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:42.505 09:07:19 -- host/digest.sh@71 -- # (( 405 > 0 )) 00:16:42.505 09:07:19 -- host/digest.sh@73 -- # killprocess 72188 00:16:42.505 09:07:19 -- common/autotest_common.sh@936 -- # '[' -z 72188 ']' 00:16:42.505 09:07:19 -- common/autotest_common.sh@940 -- # kill -0 72188 00:16:42.505 09:07:19 -- common/autotest_common.sh@941 -- # uname 00:16:42.505 09:07:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.505 09:07:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72188 00:16:42.505 09:07:19 -- 
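
The host/digest.sh trace above is the pass/fail check for this run: it reads bdev iostat over the bperf RPC socket, extracts the command_transient_transport_error counter with jq, and requires it to be greater than zero (405 injected digest errors were counted here). A condensed bash sketch of that check, reconstructed from the trace - the rpc.py path, socket path and jq filter are taken verbatim from the log, while the function body and error handling are simplified assumptions:

    # Count transient transport errors recorded against a bdev via the bperf RPC socket.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errs=$(get_transient_errcount nvme0n1)
    # The test passes only if at least one injected digest error was observed (405 in the run above).
    (( errs > 0 ))
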
common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:42.505 09:07:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:42.505 killing process with pid 72188 00:16:42.505 09:07:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72188' 00:16:42.505 09:07:19 -- common/autotest_common.sh@955 -- # kill 72188 00:16:42.505 Received shutdown signal, test time was about 2.000000 seconds 00:16:42.505 00:16:42.505 Latency(us) 00:16:42.505 [2024-11-17T09:07:19.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.505 [2024-11-17T09:07:19.435Z] =================================================================================================================== 00:16:42.505 [2024-11-17T09:07:19.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.505 09:07:19 -- common/autotest_common.sh@960 -- # wait 72188 00:16:42.769 09:07:19 -- host/digest.sh@115 -- # killprocess 71983 00:16:42.769 09:07:19 -- common/autotest_common.sh@936 -- # '[' -z 71983 ']' 00:16:42.769 09:07:19 -- common/autotest_common.sh@940 -- # kill -0 71983 00:16:42.769 09:07:19 -- common/autotest_common.sh@941 -- # uname 00:16:42.769 09:07:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.769 09:07:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71983 00:16:42.769 09:07:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.769 09:07:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.769 killing process with pid 71983 00:16:42.769 09:07:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71983' 00:16:42.769 09:07:19 -- common/autotest_common.sh@955 -- # kill 71983 00:16:42.769 09:07:19 -- common/autotest_common.sh@960 -- # wait 71983 00:16:42.769 00:16:42.769 real 0m17.599s 00:16:42.769 user 0m34.873s 00:16:42.769 sys 0m4.534s 00:16:42.769 09:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:42.769 09:07:19 -- common/autotest_common.sh@10 -- # set +x 00:16:42.769 ************************************ 00:16:42.769 END TEST nvmf_digest_error 00:16:42.769 ************************************ 00:16:43.028 09:07:19 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:16:43.028 09:07:19 -- host/digest.sh@139 -- # nvmftestfini 00:16:43.028 09:07:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:43.028 09:07:19 -- nvmf/common.sh@116 -- # sync 00:16:43.028 09:07:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:43.028 09:07:19 -- nvmf/common.sh@119 -- # set +e 00:16:43.028 09:07:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:43.028 09:07:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:43.028 rmmod nvme_tcp 00:16:43.028 rmmod nvme_fabrics 00:16:43.028 rmmod nvme_keyring 00:16:43.028 09:07:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:43.028 09:07:19 -- nvmf/common.sh@123 -- # set -e 00:16:43.028 09:07:19 -- nvmf/common.sh@124 -- # return 0 00:16:43.028 09:07:19 -- nvmf/common.sh@477 -- # '[' -n 71983 ']' 00:16:43.028 09:07:19 -- nvmf/common.sh@478 -- # killprocess 71983 00:16:43.028 09:07:19 -- common/autotest_common.sh@936 -- # '[' -z 71983 ']' 00:16:43.028 09:07:19 -- common/autotest_common.sh@940 -- # kill -0 71983 00:16:43.028 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (71983) - No such process 00:16:43.028 Process with pid 71983 is not found 00:16:43.028 09:07:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 71983 is not found' 00:16:43.028 09:07:19 -- 
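
killprocess, traced twice above (for the bperf process, pid 72188, and for the already-exited nvmf target, pid 71983), is the autotest helper that stops a background process and tolerates one that is already gone. A simplified sketch of the behaviour visible in the trace - the real common/autotest_common.sh helper carries more argument validation and signal handling than shown here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        # kill -0 only probes for existence; pid 71983 above had already exited.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        # On Linux, refuse to kill unrelated processes such as sudo.
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }
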
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:43.028 09:07:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:43.028 09:07:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:43.028 09:07:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.028 09:07:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:43.028 09:07:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.028 09:07:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.028 09:07:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.028 09:07:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:43.028 00:16:43.028 real 0m34.585s 00:16:43.028 user 1m6.095s 00:16:43.028 sys 0m9.166s 00:16:43.028 09:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:43.028 09:07:19 -- common/autotest_common.sh@10 -- # set +x 00:16:43.028 ************************************ 00:16:43.028 END TEST nvmf_digest 00:16:43.028 ************************************ 00:16:43.028 09:07:19 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:16:43.028 09:07:19 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:16:43.028 09:07:19 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:43.028 09:07:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.028 09:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.028 09:07:19 -- common/autotest_common.sh@10 -- # set +x 00:16:43.028 ************************************ 00:16:43.028 START TEST nvmf_multipath 00:16:43.028 ************************************ 00:16:43.028 09:07:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:43.288 * Looking for test storage... 00:16:43.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:43.288 09:07:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:43.288 09:07:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:43.288 09:07:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:43.288 09:07:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:43.288 09:07:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:43.288 09:07:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:43.288 09:07:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:43.288 09:07:20 -- scripts/common.sh@335 -- # IFS=.-: 00:16:43.288 09:07:20 -- scripts/common.sh@335 -- # read -ra ver1 00:16:43.288 09:07:20 -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.288 09:07:20 -- scripts/common.sh@336 -- # read -ra ver2 00:16:43.288 09:07:20 -- scripts/common.sh@337 -- # local 'op=<' 00:16:43.288 09:07:20 -- scripts/common.sh@339 -- # ver1_l=2 00:16:43.288 09:07:20 -- scripts/common.sh@340 -- # ver2_l=1 00:16:43.288 09:07:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:43.288 09:07:20 -- scripts/common.sh@343 -- # case "$op" in 00:16:43.288 09:07:20 -- scripts/common.sh@344 -- # : 1 00:16:43.288 09:07:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:43.288 09:07:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.288 09:07:20 -- scripts/common.sh@364 -- # decimal 1 00:16:43.288 09:07:20 -- scripts/common.sh@352 -- # local d=1 00:16:43.288 09:07:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.288 09:07:20 -- scripts/common.sh@354 -- # echo 1 00:16:43.288 09:07:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:43.288 09:07:20 -- scripts/common.sh@365 -- # decimal 2 00:16:43.288 09:07:20 -- scripts/common.sh@352 -- # local d=2 00:16:43.288 09:07:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.288 09:07:20 -- scripts/common.sh@354 -- # echo 2 00:16:43.288 09:07:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:43.288 09:07:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:43.288 09:07:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:43.288 09:07:20 -- scripts/common.sh@367 -- # return 0 00:16:43.288 09:07:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.288 09:07:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:43.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.288 --rc genhtml_branch_coverage=1 00:16:43.288 --rc genhtml_function_coverage=1 00:16:43.288 --rc genhtml_legend=1 00:16:43.288 --rc geninfo_all_blocks=1 00:16:43.288 --rc geninfo_unexecuted_blocks=1 00:16:43.288 00:16:43.288 ' 00:16:43.288 09:07:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:43.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.288 --rc genhtml_branch_coverage=1 00:16:43.288 --rc genhtml_function_coverage=1 00:16:43.288 --rc genhtml_legend=1 00:16:43.288 --rc geninfo_all_blocks=1 00:16:43.288 --rc geninfo_unexecuted_blocks=1 00:16:43.288 00:16:43.288 ' 00:16:43.288 09:07:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:43.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.288 --rc genhtml_branch_coverage=1 00:16:43.288 --rc genhtml_function_coverage=1 00:16:43.288 --rc genhtml_legend=1 00:16:43.288 --rc geninfo_all_blocks=1 00:16:43.288 --rc geninfo_unexecuted_blocks=1 00:16:43.288 00:16:43.288 ' 00:16:43.288 09:07:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:43.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.288 --rc genhtml_branch_coverage=1 00:16:43.288 --rc genhtml_function_coverage=1 00:16:43.288 --rc genhtml_legend=1 00:16:43.288 --rc geninfo_all_blocks=1 00:16:43.288 --rc geninfo_unexecuted_blocks=1 00:16:43.288 00:16:43.288 ' 00:16:43.288 09:07:20 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.288 09:07:20 -- nvmf/common.sh@7 -- # uname -s 00:16:43.288 09:07:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.288 09:07:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.288 09:07:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.288 09:07:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.288 09:07:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.288 09:07:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.288 09:07:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.288 09:07:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.288 09:07:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.288 09:07:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.288 09:07:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:16:43.288 
09:07:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:16:43.288 09:07:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.288 09:07:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.288 09:07:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.288 09:07:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.288 09:07:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.288 09:07:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.288 09:07:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.288 09:07:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.288 09:07:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.288 09:07:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.288 09:07:20 -- paths/export.sh@5 -- # export PATH 00:16:43.289 09:07:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.289 09:07:20 -- nvmf/common.sh@46 -- # : 0 00:16:43.289 09:07:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:43.289 09:07:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:43.289 09:07:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:43.289 09:07:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.289 09:07:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.289 09:07:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
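The nvmftestinit call traced below rebuilds the test network from scratch: a veth pair for the initiator (nvmf_init_if / nvmf_init_br) stays on the host, two more veth pairs have their far ends (nvmf_tgt_if, nvmf_tgt_if2) moved into the nvmf_tgt_ns_spdk namespace, and an nvmf_br bridge ties the host-side ends together so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3. A condensed standalone sketch of that topology, using the same names and addresses as the trace (a summary, not the harness script itself):

  #!/usr/bin/env bash
  set -eu
  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: one for the initiator, two whose far ends go into the target namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target-side interfaces 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
  # bridge the host-side ends together and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity pings in both directions, matching the checks in the trace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1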
00:16:43.289 09:07:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:43.289 09:07:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:43.289 09:07:20 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.289 09:07:20 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.289 09:07:20 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.289 09:07:20 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:43.289 09:07:20 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:43.289 09:07:20 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:43.289 09:07:20 -- host/multipath.sh@30 -- # nvmftestinit 00:16:43.289 09:07:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:43.289 09:07:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.289 09:07:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:43.289 09:07:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:43.289 09:07:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:43.289 09:07:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.289 09:07:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.289 09:07:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.289 09:07:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:43.289 09:07:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:43.289 09:07:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:43.289 09:07:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:43.289 09:07:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:43.289 09:07:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:43.289 09:07:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.289 09:07:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.289 09:07:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.289 09:07:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:43.289 09:07:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.289 09:07:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.289 09:07:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.289 09:07:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.289 09:07:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.289 09:07:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.289 09:07:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.289 09:07:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.289 09:07:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:43.289 09:07:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:43.289 Cannot find device "nvmf_tgt_br" 00:16:43.289 09:07:20 -- nvmf/common.sh@154 -- # true 00:16:43.289 09:07:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.289 Cannot find device "nvmf_tgt_br2" 00:16:43.289 09:07:20 -- nvmf/common.sh@155 -- # true 00:16:43.289 09:07:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:43.289 09:07:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:43.289 Cannot find device "nvmf_tgt_br" 00:16:43.289 09:07:20 -- nvmf/common.sh@157 -- # true 00:16:43.289 09:07:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:43.548 Cannot find device 
"nvmf_tgt_br2" 00:16:43.548 09:07:20 -- nvmf/common.sh@158 -- # true 00:16:43.548 09:07:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:43.548 09:07:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:43.548 09:07:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.548 09:07:20 -- nvmf/common.sh@161 -- # true 00:16:43.548 09:07:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.548 09:07:20 -- nvmf/common.sh@162 -- # true 00:16:43.548 09:07:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.548 09:07:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.548 09:07:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.548 09:07:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.548 09:07:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.548 09:07:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.548 09:07:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.548 09:07:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.548 09:07:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.548 09:07:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:43.548 09:07:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:43.548 09:07:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:43.548 09:07:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:43.548 09:07:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.548 09:07:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.548 09:07:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.548 09:07:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:43.548 09:07:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:43.548 09:07:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.548 09:07:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.548 09:07:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.548 09:07:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.548 09:07:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.548 09:07:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:43.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:16:43.548 00:16:43.548 --- 10.0.0.2 ping statistics --- 00:16:43.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.548 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:43.548 09:07:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:43.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:43.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:16:43.807 00:16:43.807 --- 10.0.0.3 ping statistics --- 00:16:43.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.807 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:43.807 09:07:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:43.807 00:16:43.807 --- 10.0.0.1 ping statistics --- 00:16:43.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.807 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:43.807 09:07:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.807 09:07:20 -- nvmf/common.sh@421 -- # return 0 00:16:43.807 09:07:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:43.807 09:07:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.807 09:07:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:43.807 09:07:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:43.807 09:07:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.807 09:07:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:43.807 09:07:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:43.807 09:07:20 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:43.807 09:07:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.807 09:07:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.807 09:07:20 -- common/autotest_common.sh@10 -- # set +x 00:16:43.807 09:07:20 -- nvmf/common.sh@469 -- # nvmfpid=72461 00:16:43.807 09:07:20 -- nvmf/common.sh@470 -- # waitforlisten 72461 00:16:43.807 09:07:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:43.807 09:07:20 -- common/autotest_common.sh@829 -- # '[' -z 72461 ']' 00:16:43.807 09:07:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.807 09:07:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.807 09:07:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.807 09:07:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.807 09:07:20 -- common/autotest_common.sh@10 -- # set +x 00:16:43.807 [2024-11-17 09:07:20.573645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:43.807 [2024-11-17 09:07:20.573791] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.807 [2024-11-17 09:07:20.714973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:44.066 [2024-11-17 09:07:20.783252] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:44.066 [2024-11-17 09:07:20.783410] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.066 [2024-11-17 09:07:20.783425] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:44.066 [2024-11-17 09:07:20.783442] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.066 [2024-11-17 09:07:20.783576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.066 [2024-11-17 09:07:20.783969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.003 09:07:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.003 09:07:21 -- common/autotest_common.sh@862 -- # return 0 00:16:45.003 09:07:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:45.003 09:07:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.003 09:07:21 -- common/autotest_common.sh@10 -- # set +x 00:16:45.003 09:07:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.003 09:07:21 -- host/multipath.sh@33 -- # nvmfapp_pid=72461 00:16:45.003 09:07:21 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:45.003 [2024-11-17 09:07:21.838460] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.003 09:07:21 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:45.261 Malloc0 00:16:45.261 09:07:22 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:45.520 09:07:22 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:45.779 09:07:22 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.037 [2024-11-17 09:07:22.820473] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.037 09:07:22 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:46.295 [2024-11-17 09:07:23.088685] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:46.295 09:07:23 -- host/multipath.sh@44 -- # bdevperf_pid=72517 00:16:46.295 09:07:23 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:46.295 09:07:23 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.295 09:07:23 -- host/multipath.sh@47 -- # waitforlisten 72517 /var/tmp/bdevperf.sock 00:16:46.295 09:07:23 -- common/autotest_common.sh@829 -- # '[' -z 72517 ']' 00:16:46.295 09:07:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.295 09:07:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.295 09:07:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
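Everything the target side needs is configured over JSON-RPC in the calls traced above, against the nvmf_tgt that was just started inside the namespace: one TCP transport, a 64 MiB Malloc namespace, and a single subsystem with ANA reporting enabled that listens on both 4420 and 4421 at the same address, followed by a separate bdevperf process that will act as the initiator. Collected into one place (same binaries, flags, and paths as in the trace; a sketch rather than the script itself):

  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py"
  nqn=nqn.2016-06.io.spdk:cnode1

  # target runs inside the namespace so its listeners live on 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  # (the harness waits for each daemon's RPC socket before issuing calls)

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting on
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421

  # initiator: bdevperf with its own RPC socket, 128-deep 4k verify workload for 90 seconds
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &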
00:16:46.295 09:07:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.295 09:07:23 -- common/autotest_common.sh@10 -- # set +x 00:16:47.232 09:07:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.232 09:07:24 -- common/autotest_common.sh@862 -- # return 0 00:16:47.232 09:07:24 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:47.492 09:07:24 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:47.752 Nvme0n1 00:16:47.752 09:07:24 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:48.320 Nvme0n1 00:16:48.320 09:07:24 -- host/multipath.sh@78 -- # sleep 1 00:16:48.320 09:07:24 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:49.257 09:07:25 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:49.257 09:07:25 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:49.516 09:07:26 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:49.775 09:07:26 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:49.775 09:07:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:49.775 09:07:26 -- host/multipath.sh@65 -- # dtrace_pid=72562 00:16:49.775 09:07:26 -- host/multipath.sh@66 -- # sleep 6 00:16:56.342 09:07:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:56.342 09:07:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:56.342 09:07:32 -- host/multipath.sh@67 -- # active_port=4421 00:16:56.342 09:07:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:56.342 Attaching 4 probes... 
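On the initiator side the two listeners are attached as two paths of one controller: the first bdev_nvme_attach_controller call creates Nvme0n1 over port 4420, and the second call, with -x multipath, adds 4421 as an additional path to the same bdev rather than a new device. From there the test only flips ANA states on the target and watches which path carries the traffic; the bpftrace @path counters printed just below record the port seen by each I/O. A minimal sketch of this step with the flags from the trace (commands against the bdevperf RPC socket and the target RPC socket respectively):

  spdk=/home/vagrant/spdk_repo/spdk
  brpc="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"   # initiator-side RPC
  rpc="$spdk/scripts/rpc.py"                              # target-side RPC
  nqn=nqn.2016-06.io.spdk:cnode1

  $brpc bdev_nvme_set_options -r -1
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n "$nqn" -l -1 -o 10                     # first path -> Nvme0n1
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n "$nqn" -x multipath -l -1 -o 10        # second path, same bdev

  # steer I/O by ANA state: 4420 non_optimized, 4421 optimized -> expect traffic on 4421
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized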
00:16:56.342 @path[10.0.0.2, 4421]: 19285 00:16:56.342 @path[10.0.0.2, 4421]: 19522 00:16:56.342 @path[10.0.0.2, 4421]: 19417 00:16:56.342 @path[10.0.0.2, 4421]: 19327 00:16:56.342 @path[10.0.0.2, 4421]: 19602 00:16:56.342 09:07:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:56.342 09:07:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:56.342 09:07:32 -- host/multipath.sh@69 -- # sed -n 1p 00:16:56.342 09:07:32 -- host/multipath.sh@69 -- # port=4421 00:16:56.342 09:07:32 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:56.342 09:07:32 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:56.342 09:07:32 -- host/multipath.sh@72 -- # kill 72562 00:16:56.342 09:07:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:56.342 09:07:32 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:56.342 09:07:32 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:56.342 09:07:33 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:56.611 09:07:33 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:56.611 09:07:33 -- host/multipath.sh@65 -- # dtrace_pid=72683 00:16:56.611 09:07:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:56.611 09:07:33 -- host/multipath.sh@66 -- # sleep 6 00:17:03.192 09:07:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:03.192 09:07:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:03.192 09:07:39 -- host/multipath.sh@67 -- # active_port=4420 00:17:03.192 09:07:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:03.192 Attaching 4 probes... 
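Each confirm_io_on_port round, like the one traced above, cross-checks two sources: the target's own view (nvmf_subsystem_get_listeners filtered through jq for the listener in the requested ANA state) and the bpftrace probe's view (the @path[addr, port] counters written to trace.txt, reduced with cut/awk/sed to the port that actually carried I/O). The round passes only when the two ports agree, and both come back empty when every path is inaccessible. A condensed sketch of that comparison, assuming trace.txt has the layout shown in the @path lines that continue below:

  ana_state=optimized          # or non_optimized / inaccessible
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

  # what the target claims: trsvcid of the listener currently in $ana_state
  active_port=$($rpc nvmf_subsystem_get_listeners "$nqn" \
      | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

  # what the probe saw: first '@path[10.0.0.2, PORT]: count' entry in trace.txt
  port=$(cut -d ']' -f1 "$trace" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)

  [[ "$port" == "$active_port" ]]   # both empty in the all-inaccessible round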
00:17:03.192 @path[10.0.0.2, 4420]: 19384 00:17:03.192 @path[10.0.0.2, 4420]: 20084 00:17:03.192 @path[10.0.0.2, 4420]: 20143 00:17:03.192 @path[10.0.0.2, 4420]: 20106 00:17:03.192 @path[10.0.0.2, 4420]: 19706 00:17:03.192 09:07:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:03.192 09:07:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:03.192 09:07:39 -- host/multipath.sh@69 -- # sed -n 1p 00:17:03.192 09:07:39 -- host/multipath.sh@69 -- # port=4420 00:17:03.192 09:07:39 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:03.192 09:07:39 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:03.192 09:07:39 -- host/multipath.sh@72 -- # kill 72683 00:17:03.192 09:07:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:03.192 09:07:39 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:03.192 09:07:39 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:03.192 09:07:39 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:03.450 09:07:40 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:03.450 09:07:40 -- host/multipath.sh@65 -- # dtrace_pid=72802 00:17:03.450 09:07:40 -- host/multipath.sh@66 -- # sleep 6 00:17:03.450 09:07:40 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:10.011 09:07:46 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:10.011 09:07:46 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:10.011 09:07:46 -- host/multipath.sh@67 -- # active_port=4421 00:17:10.011 09:07:46 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:10.011 Attaching 4 probes... 
00:17:10.011 @path[10.0.0.2, 4421]: 15688 00:17:10.011 @path[10.0.0.2, 4421]: 19020 00:17:10.011 @path[10.0.0.2, 4421]: 19165 00:17:10.011 @path[10.0.0.2, 4421]: 19179 00:17:10.011 @path[10.0.0.2, 4421]: 19034 00:17:10.011 09:07:46 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:10.011 09:07:46 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:10.011 09:07:46 -- host/multipath.sh@69 -- # sed -n 1p 00:17:10.012 09:07:46 -- host/multipath.sh@69 -- # port=4421 00:17:10.012 09:07:46 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:10.012 09:07:46 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:10.012 09:07:46 -- host/multipath.sh@72 -- # kill 72802 00:17:10.012 09:07:46 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:10.012 09:07:46 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:10.012 09:07:46 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:10.012 09:07:46 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:10.270 09:07:47 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:10.270 09:07:47 -- host/multipath.sh@65 -- # dtrace_pid=72914 00:17:10.270 09:07:47 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:10.270 09:07:47 -- host/multipath.sh@66 -- # sleep 6 00:17:16.875 09:07:53 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:16.875 09:07:53 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:16.875 09:07:53 -- host/multipath.sh@67 -- # active_port= 00:17:16.875 09:07:53 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:16.875 Attaching 4 probes... 
00:17:16.875 00:17:16.875 00:17:16.875 00:17:16.875 00:17:16.875 00:17:16.875 09:07:53 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:16.875 09:07:53 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:16.875 09:07:53 -- host/multipath.sh@69 -- # sed -n 1p 00:17:16.875 09:07:53 -- host/multipath.sh@69 -- # port= 00:17:16.875 09:07:53 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:16.875 09:07:53 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:16.875 09:07:53 -- host/multipath.sh@72 -- # kill 72914 00:17:16.875 09:07:53 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:16.876 09:07:53 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:16.876 09:07:53 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:16.876 09:07:53 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:16.876 09:07:53 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:16.876 09:07:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:16.876 09:07:53 -- host/multipath.sh@65 -- # dtrace_pid=73033 00:17:16.876 09:07:53 -- host/multipath.sh@66 -- # sleep 6 00:17:23.440 09:07:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:23.440 09:07:59 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:23.440 09:08:00 -- host/multipath.sh@67 -- # active_port=4421 00:17:23.440 09:08:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:23.440 Attaching 4 probes... 
00:17:23.440 @path[10.0.0.2, 4421]: 18533 00:17:23.440 @path[10.0.0.2, 4421]: 19569 00:17:23.440 @path[10.0.0.2, 4421]: 19303 00:17:23.440 @path[10.0.0.2, 4421]: 19204 00:17:23.440 @path[10.0.0.2, 4421]: 18880 00:17:23.440 09:08:00 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:23.440 09:08:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:23.440 09:08:00 -- host/multipath.sh@69 -- # sed -n 1p 00:17:23.440 09:08:00 -- host/multipath.sh@69 -- # port=4421 00:17:23.440 09:08:00 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:23.440 09:08:00 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:23.440 09:08:00 -- host/multipath.sh@72 -- # kill 73033 00:17:23.440 09:08:00 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:23.440 09:08:00 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:23.440 [2024-11-17 09:08:00.364588] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364812] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364870] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364895] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.364998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.365006] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.440 [2024-11-17 09:08:00.365015] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955230 is same with the state(5) to be set 00:17:23.699 09:08:00 -- host/multipath.sh@101 -- # sleep 1 00:17:24.634 09:08:01 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:24.634 09:08:01 -- host/multipath.sh@65 -- # dtrace_pid=73151 00:17:24.634 09:08:01 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:24.634 09:08:01 -- host/multipath.sh@66 -- # sleep 6 00:17:31.200 09:08:07 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:31.200 09:08:07 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:31.200 09:08:07 -- host/multipath.sh@67 -- # active_port=4420 00:17:31.200 09:08:07 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:31.200 Attaching 4 probes... 00:17:31.200 @path[10.0.0.2, 4420]: 18587 00:17:31.200 @path[10.0.0.2, 4420]: 18997 00:17:31.200 @path[10.0.0.2, 4420]: 18941 00:17:31.200 @path[10.0.0.2, 4420]: 19108 00:17:31.200 @path[10.0.0.2, 4420]: 19431 00:17:31.200 09:08:07 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:31.200 09:08:07 -- host/multipath.sh@69 -- # sed -n 1p 00:17:31.200 09:08:07 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:31.200 09:08:07 -- host/multipath.sh@69 -- # port=4420 00:17:31.200 09:08:07 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:31.200 09:08:07 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:31.200 09:08:07 -- host/multipath.sh@72 -- # kill 73151 00:17:31.200 09:08:07 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:31.200 09:08:07 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:31.200 [2024-11-17 09:08:07.915979] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:31.200 09:08:07 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:31.459 09:08:08 -- host/multipath.sh@111 -- # sleep 6 00:17:38.032 09:08:14 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:38.032 09:08:14 -- host/multipath.sh@65 -- # dtrace_pid=73331 00:17:38.032 09:08:14 -- host/multipath.sh@66 -- # sleep 6 00:17:38.032 09:08:14 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:43.303 09:08:20 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:43.303 09:08:20 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:43.880 09:08:20 -- host/multipath.sh@67 -- # active_port=4421 00:17:43.880 09:08:20 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.880 Attaching 4 probes... 
00:17:43.880 @path[10.0.0.2, 4421]: 18528 00:17:43.880 @path[10.0.0.2, 4421]: 18701 00:17:43.880 @path[10.0.0.2, 4421]: 19451 00:17:43.880 @path[10.0.0.2, 4421]: 19174 00:17:43.880 @path[10.0.0.2, 4421]: 19088 00:17:43.880 09:08:20 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:43.880 09:08:20 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:43.880 09:08:20 -- host/multipath.sh@69 -- # sed -n 1p 00:17:43.880 09:08:20 -- host/multipath.sh@69 -- # port=4421 00:17:43.880 09:08:20 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:43.880 09:08:20 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:43.880 09:08:20 -- host/multipath.sh@72 -- # kill 73331 00:17:43.880 09:08:20 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.880 09:08:20 -- host/multipath.sh@114 -- # killprocess 72517 00:17:43.880 09:08:20 -- common/autotest_common.sh@936 -- # '[' -z 72517 ']' 00:17:43.880 09:08:20 -- common/autotest_common.sh@940 -- # kill -0 72517 00:17:43.880 09:08:20 -- common/autotest_common.sh@941 -- # uname 00:17:43.880 09:08:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.880 09:08:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72517 00:17:43.880 09:08:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:43.880 09:08:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:43.880 09:08:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72517' 00:17:43.880 killing process with pid 72517 00:17:43.880 09:08:20 -- common/autotest_common.sh@955 -- # kill 72517 00:17:43.880 09:08:20 -- common/autotest_common.sh@960 -- # wait 72517 00:17:43.880 Connection closed with partial response: 00:17:43.880 00:17:43.880 00:17:43.880 09:08:20 -- host/multipath.sh@116 -- # wait 72517 00:17:43.880 09:08:20 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:43.880 [2024-11-17 09:07:23.154834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:43.880 [2024-11-17 09:07:23.154942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72517 ] 00:17:43.880 [2024-11-17 09:07:23.288140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.880 [2024-11-17 09:07:23.357369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.880 Running I/O for 90 seconds... 
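The try.txt dump begun just above runs for the rest of the capture; the bulk of it is per-command completion notices with ASYMMETRIC ACCESS INACCESSIBLE status, logged each time an I/O landed on a path whose ANA state had just been made inaccessible, which the multipath bdev layer is expected to retry on the surviving path. When reading such a capture offline, a couple of throwaway filters (hypothetical helpers, not part of the harness) make the flood easier to scan:

  # how many completions came back with the inaccessible ANA status
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt
  # which queue pairs they were spread across
  grep -o 'qid:[0-9]*' try.txt | sort | uniq -c
  # timeline: completion notices per second of the run
  grep 'spdk_nvme_print_completion' try.txt | cut -c2-20 | uniq -c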
00:17:43.880 [2024-11-17 09:07:33.326851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.326921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.326975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.326996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.327034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.327271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.327326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.327364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.327411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.327517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.880 [2024-11-17 09:07:33.327808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.880 [2024-11-17 09:07:33.327847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.880 [2024-11-17 09:07:33.327868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.327881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.327901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.327915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.327935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.327949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.327968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.327982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:43.881 [2024-11-17 09:07:33.328249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.328795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.328976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.328997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.329012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.329033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.329047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.329067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.329081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.329101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.881 [2024-11-17 09:07:33.329115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.329135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.329149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.329169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.329183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.329203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.329217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:43.881 [2024-11-17 09:07:33.329237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.881 [2024-11-17 09:07:33.329251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:17:43.882 [2024-11-17 09:07:33.329306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.329363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.329397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.329466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.329500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.329654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.329698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.329982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.329997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.330176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.330329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:43.882 [2024-11-17 09:07:33.330480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.330551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.330585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.330619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.330666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.882 [2024-11-17 09:07:33.330708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.882 [2024-11-17 09:07:33.330744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:43.882 [2024-11-17 09:07:33.330764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.330785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.330807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.330821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.330841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.330856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.330877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.330892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.330912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.330926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.330946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.330960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.330981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.330995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.331015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.331029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.332420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.332463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.332539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.332800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.332871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.332950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:33.332965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:17:43.883 [2024-11-17 09:07:33.332986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.333015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.333035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.333049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.333069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.333083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:33.333128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:33.333145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:39.880473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:39.880539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.883 [2024-11-17 09:07:39.880723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.880973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.883 [2024-11-17 09:07:39.880995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:43.883 [2024-11-17 09:07:39.881017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881133] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.884 [2024-11-17 09:07:39.881452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.884 [2024-11-17 09:07:39.881520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.884 [2024-11-17 09:07:39.881603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.884 [2024-11-17 09:07:39.881642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.881941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.884 [2024-11-17 09:07:39.881977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.881999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.882014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.882050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.882065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.882101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.882115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.882136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.882151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.882172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.882186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.882206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.884 [2024-11-17 09:07:39.882221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:43.884 [2024-11-17 09:07:39.882241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:43.885 
[2024-11-17 09:07:39.882275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.882403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.882509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.882548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.882584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.882756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.882792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.882944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.882966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.882980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.883422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.883524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.883559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.885 [2024-11-17 09:07:39.883594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:43.885 [2024-11-17 09:07:39.883662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.885 [2024-11-17 09:07:39.883677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.883698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.883711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.883739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.883754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.883775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.883790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.883810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.883824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.883844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.883859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.883880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.883894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.883914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.883929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.883950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.883983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.884090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 
09:07:39.884111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.884367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.884397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.885232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.885283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.885326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.885370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.885416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.885461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.885516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.885561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.885623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.885670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.885753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.885800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.885847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.885893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.885960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.885992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.886008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.886053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.886 [2024-11-17 09:07:39.886069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:39.886113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:39.886128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:46.994161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:46.994253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:43.886 [2024-11-17 09:07:46.994326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.886 [2024-11-17 09:07:46.994348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.994448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.994513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:62 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.994836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.994869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.994905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.994938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.994970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.994994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.995009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.995041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.995106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:17:43.887 [2024-11-17 09:07:46.995428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.995494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.887 [2024-11-17 09:07:46.995527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.887 [2024-11-17 09:07:46.995700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:43.887 [2024-11-17 09:07:46.995721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.995735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.995755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.995770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.995790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.995804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.995824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.995838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.995859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.995874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.995894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.995909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.995929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.995943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.995964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.995978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.995998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:43.888 [2024-11-17 09:07:46.996522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.996906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.996957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.996986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.997002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.997038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.997052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.997072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.997086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.997107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.888 [2024-11-17 09:07:46.997121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.997141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.888 [2024-11-17 09:07:46.997155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:43.888 [2024-11-17 09:07:46.997176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.997469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.997573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.997626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:43.889 [2024-11-17 09:07:46.997682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.997722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.997782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.997818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.997853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.997924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.997969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.997991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.998006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.998176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.998400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.998413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.999140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.889 [2024-11-17 09:07:46.999169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.999202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.889 [2024-11-17 09:07:46.999218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:43.889 [2024-11-17 09:07:46.999248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:07:46.999481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:43.890 [2024-11-17 09:07:46.999644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:07:46.999686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:07:46.999827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:07:46.999868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:07:46.999896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:07:46.999911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.364876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.890 [2024-11-17 09:08:00.364940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.364961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.890 [2024-11-17 09:08:00.364976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.364991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.890 [2024-11-17 09:08:00.365005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:43.890 [2024-11-17 09:08:00.365035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecb20 is same with the state(5) to be set 00:17:43.890 [2024-11-17 09:08:00.365144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365463] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:08:00.365804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.890 [2024-11-17 09:08:00.365834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.890 [2024-11-17 09:08:00.365850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.890 [2024-11-17 09:08:00.365864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.365880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.365894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.365910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.365924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.365940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.365954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.365970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.365984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:47 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76048 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.366713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 
[2024-11-17 09:08:00.366895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.366970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.366984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.367000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.367021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.367038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.367052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.367068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.367082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.367098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.367112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.367128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.367142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.367158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.891 [2024-11-17 09:08:00.367181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.891 [2024-11-17 09:08:00.367197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.891 [2024-11-17 09:08:00.367211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.367964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.367978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.367992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.368047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.368107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 
[2024-11-17 09:08:00.368150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.368227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.368283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.892 [2024-11-17 09:08:00.368311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.892 [2024-11-17 09:08:00.368395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.892 [2024-11-17 09:08:00.368410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.368673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.368703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.368733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:60 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.368824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.368884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.368914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.368943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.368980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.368996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.369010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.369054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.369083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76504 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.369114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.369143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.369172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.893 [2024-11-17 09:08:00.369201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.369230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.369259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.893 [2024-11-17 09:08:00.369311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:43.893 [2024-11-17 09:08:00.369377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:43.893 [2024-11-17 09:08:00.369389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75912 len:8 PRP1 0x0 PRP2 0x0 00:17:43.893 [2024-11-17 09:08:00.369403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.893 [2024-11-17 09:08:00.369452] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x250fc50 was disconnected and freed. reset controller. 
00:17:43.893 [2024-11-17 09:08:00.370582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:43.893 [2024-11-17 09:08:00.370637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ecb20 (9): Bad file descriptor 00:17:43.893 [2024-11-17 09:08:00.370975] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.893 [2024-11-17 09:08:00.371054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.893 [2024-11-17 09:08:00.371108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.893 [2024-11-17 09:08:00.371132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ecb20 with addr=10.0.0.2, port=4421 00:17:43.893 [2024-11-17 09:08:00.371149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ecb20 is same with the state(5) to be set 00:17:43.893 [2024-11-17 09:08:00.371184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ecb20 (9): Bad file descriptor 00:17:43.893 [2024-11-17 09:08:00.371216] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:43.893 [2024-11-17 09:08:00.371247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:43.893 [2024-11-17 09:08:00.371262] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:43.893 [2024-11-17 09:08:00.371293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:43.893 [2024-11-17 09:08:00.371310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:43.893 [2024-11-17 09:08:10.417742] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:43.894 Received shutdown signal, test time was about 55.441659 seconds
00:17:43.894
00:17:43.894 Latency(us)
00:17:43.894 [2024-11-17T09:08:20.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:43.894 [2024-11-17T09:08:20.824Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:43.894 Verification LBA range: start 0x0 length 0x4000
00:17:43.894 Nvme0n1 : 55.44 10965.28 42.83 0.00 0.00 11653.83 467.32 7015926.69
00:17:43.894 [2024-11-17T09:08:20.824Z] ===================================================================================================================
00:17:43.894 [2024-11-17T09:08:20.824Z] Total : 10965.28 42.83 0.00 0.00 11653.83 467.32 7015926.69
00:17:43.894 09:08:20 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.153 09:08:20 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:44.153 09:08:20 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:44.153 09:08:20 -- host/multipath.sh@125 -- # nvmftestfini 00:17:44.153 09:08:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:44.153 09:08:20 -- nvmf/common.sh@116 -- # sync 00:17:44.153 09:08:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:44.153 09:08:20 -- nvmf/common.sh@119 -- # set +e 00:17:44.153 09:08:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:44.153 09:08:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:44.153 rmmod nvme_tcp 00:17:44.153 rmmod nvme_fabrics 00:17:44.153 rmmod nvme_keyring
nvmf_multipath 00:17:44.412 ************************************ 00:17:44.412 00:17:44.412 real 1m1.372s 00:17:44.412 user 2m49.844s 00:17:44.412 sys 0m18.478s 00:17:44.412 09:08:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:44.412 09:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:44.673 09:08:21 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:44.673 09:08:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.673 09:08:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:44.673 09:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:44.673 ************************************ 00:17:44.673 START TEST nvmf_timeout 00:17:44.673 ************************************ 00:17:44.673 09:08:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:44.673 * Looking for test storage... 00:17:44.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:44.673 09:08:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:44.673 09:08:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:44.673 09:08:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:44.673 09:08:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:44.673 09:08:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:44.673 09:08:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:44.673 09:08:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:44.673 09:08:21 -- scripts/common.sh@335 -- # IFS=.-: 00:17:44.673 09:08:21 -- scripts/common.sh@335 -- # read -ra ver1 00:17:44.673 09:08:21 -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.673 09:08:21 -- scripts/common.sh@336 -- # read -ra ver2 00:17:44.673 09:08:21 -- scripts/common.sh@337 -- # local 'op=<' 00:17:44.673 09:08:21 -- scripts/common.sh@339 -- # ver1_l=2 00:17:44.673 09:08:21 -- scripts/common.sh@340 -- # ver2_l=1 00:17:44.673 09:08:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:44.673 09:08:21 -- scripts/common.sh@343 -- # case "$op" in 00:17:44.673 09:08:21 -- scripts/common.sh@344 -- # : 1 00:17:44.673 09:08:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:44.673 09:08:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.673 09:08:21 -- scripts/common.sh@364 -- # decimal 1 00:17:44.673 09:08:21 -- scripts/common.sh@352 -- # local d=1 00:17:44.673 09:08:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.673 09:08:21 -- scripts/common.sh@354 -- # echo 1 00:17:44.673 09:08:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:44.673 09:08:21 -- scripts/common.sh@365 -- # decimal 2 00:17:44.673 09:08:21 -- scripts/common.sh@352 -- # local d=2 00:17:44.673 09:08:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.673 09:08:21 -- scripts/common.sh@354 -- # echo 2 00:17:44.673 09:08:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:44.673 09:08:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:44.673 09:08:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:44.673 09:08:21 -- scripts/common.sh@367 -- # return 0 00:17:44.673 09:08:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.673 09:08:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:44.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.673 --rc genhtml_branch_coverage=1 00:17:44.673 --rc genhtml_function_coverage=1 00:17:44.673 --rc genhtml_legend=1 00:17:44.673 --rc geninfo_all_blocks=1 00:17:44.673 --rc geninfo_unexecuted_blocks=1 00:17:44.673 00:17:44.673 ' 00:17:44.673 09:08:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:44.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.674 --rc genhtml_branch_coverage=1 00:17:44.674 --rc genhtml_function_coverage=1 00:17:44.674 --rc genhtml_legend=1 00:17:44.674 --rc geninfo_all_blocks=1 00:17:44.674 --rc geninfo_unexecuted_blocks=1 00:17:44.674 00:17:44.674 ' 00:17:44.674 09:08:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:44.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.674 --rc genhtml_branch_coverage=1 00:17:44.674 --rc genhtml_function_coverage=1 00:17:44.674 --rc genhtml_legend=1 00:17:44.674 --rc geninfo_all_blocks=1 00:17:44.674 --rc geninfo_unexecuted_blocks=1 00:17:44.674 00:17:44.674 ' 00:17:44.674 09:08:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:44.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.674 --rc genhtml_branch_coverage=1 00:17:44.674 --rc genhtml_function_coverage=1 00:17:44.674 --rc genhtml_legend=1 00:17:44.674 --rc geninfo_all_blocks=1 00:17:44.674 --rc geninfo_unexecuted_blocks=1 00:17:44.674 00:17:44.674 ' 00:17:44.674 09:08:21 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:44.674 09:08:21 -- nvmf/common.sh@7 -- # uname -s 00:17:44.674 09:08:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.674 09:08:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.674 09:08:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.674 09:08:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.674 09:08:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.674 09:08:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.674 09:08:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.674 09:08:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.674 09:08:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.674 09:08:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.674 09:08:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:17:44.674 
09:08:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:17:44.674 09:08:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.674 09:08:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.674 09:08:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:44.674 09:08:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:44.674 09:08:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.674 09:08:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.674 09:08:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.674 09:08:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.674 09:08:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.674 09:08:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.674 09:08:21 -- paths/export.sh@5 -- # export PATH 00:17:44.674 09:08:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.674 09:08:21 -- nvmf/common.sh@46 -- # : 0 00:17:44.674 09:08:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:44.674 09:08:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:44.674 09:08:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:44.674 09:08:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.674 09:08:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.674 09:08:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:44.674 09:08:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:44.674 09:08:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:44.674 09:08:21 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:44.674 09:08:21 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:44.674 09:08:21 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.674 09:08:21 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:44.674 09:08:21 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.674 09:08:21 -- host/timeout.sh@19 -- # nvmftestinit 00:17:44.674 09:08:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:44.674 09:08:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.674 09:08:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:44.674 09:08:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:44.674 09:08:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:44.674 09:08:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.674 09:08:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.674 09:08:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.674 09:08:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:44.674 09:08:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:44.674 09:08:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:44.674 09:08:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:44.674 09:08:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:44.674 09:08:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:44.674 09:08:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.674 09:08:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.674 09:08:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:44.674 09:08:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:44.674 09:08:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:44.674 09:08:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:44.674 09:08:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:44.674 09:08:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.674 09:08:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:44.674 09:08:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:44.674 09:08:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:44.674 09:08:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:44.674 09:08:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:44.674 09:08:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:44.674 Cannot find device "nvmf_tgt_br" 00:17:44.674 09:08:21 -- nvmf/common.sh@154 -- # true 00:17:44.674 09:08:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.938 Cannot find device "nvmf_tgt_br2" 00:17:44.938 09:08:21 -- nvmf/common.sh@155 -- # true 00:17:44.938 09:08:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:44.938 09:08:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:44.938 Cannot find device "nvmf_tgt_br" 00:17:44.938 09:08:21 -- nvmf/common.sh@157 -- # true 00:17:44.938 09:08:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:44.938 Cannot find device "nvmf_tgt_br2" 00:17:44.938 09:08:21 -- nvmf/common.sh@158 -- # true 00:17:44.938 09:08:21 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:44.938 09:08:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:44.938 09:08:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.938 09:08:21 -- nvmf/common.sh@161 -- # true 00:17:44.938 09:08:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.938 09:08:21 -- nvmf/common.sh@162 -- # true 00:17:44.938 09:08:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.938 09:08:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.938 09:08:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.938 09:08:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.938 09:08:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.938 09:08:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.938 09:08:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.938 09:08:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:44.938 09:08:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:44.938 09:08:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:44.938 09:08:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:44.938 09:08:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:44.938 09:08:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:44.938 09:08:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.938 09:08:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.938 09:08:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.938 09:08:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:44.938 09:08:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:44.938 09:08:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:44.938 09:08:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.938 09:08:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.938 09:08:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.938 09:08:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.938 09:08:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:45.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:45.197 00:17:45.197 --- 10.0.0.2 ping statistics --- 00:17:45.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.197 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:45.197 09:08:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:45.197 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:45.197 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:45.197 00:17:45.197 --- 10.0.0.3 ping statistics --- 00:17:45.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.197 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:45.197 09:08:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:45.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:45.197 00:17:45.197 --- 10.0.0.1 ping statistics --- 00:17:45.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.197 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:45.197 09:08:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.197 09:08:21 -- nvmf/common.sh@421 -- # return 0 00:17:45.197 09:08:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:45.197 09:08:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.197 09:08:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:45.197 09:08:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:45.197 09:08:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.197 09:08:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:45.197 09:08:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:45.197 09:08:21 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:45.197 09:08:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:45.197 09:08:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.197 09:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:45.197 09:08:21 -- nvmf/common.sh@469 -- # nvmfpid=73645 00:17:45.197 09:08:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:45.197 09:08:21 -- nvmf/common.sh@470 -- # waitforlisten 73645 00:17:45.197 09:08:21 -- common/autotest_common.sh@829 -- # '[' -z 73645 ']' 00:17:45.197 09:08:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.197 09:08:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.197 09:08:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.198 09:08:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.198 09:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:45.198 [2024-11-17 09:08:21.956009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:45.198 [2024-11-17 09:08:21.956109] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.198 [2024-11-17 09:08:22.095789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:45.456 [2024-11-17 09:08:22.147693] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:45.456 [2024-11-17 09:08:22.147815] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.456 [2024-11-17 09:08:22.147826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
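The ip and iptables commands interleaved above come from nvmf_veth_init in test/nvmf/common.sh: they build a veth/bridge topology in which the initiator keeps 10.0.0.1 on the host side while the target, launched above inside the nvmf_tgt_ns_spdk namespace, owns 10.0.0.2 and 10.0.0.3, and the three pings confirm reachability in both directions. Condensed into one place, with the pre-cleanup and 'ip link set ... up' steps omitted, the sequence logged above is roughly:

# Condensed sketch of the veth/bridge topology built above (names and addresses
# exactly as logged; link-up and cleanup steps omitted for brevity).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # reachability check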
00:17:45.456 [2024-11-17 09:08:22.147834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.456 [2024-11-17 09:08:22.148017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.456 [2024-11-17 09:08:22.148315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.391 09:08:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.391 09:08:22 -- common/autotest_common.sh@862 -- # return 0 00:17:46.391 09:08:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:46.391 09:08:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.391 09:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:46.391 09:08:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.391 09:08:22 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:46.391 09:08:22 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:46.391 [2024-11-17 09:08:23.193968] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.391 09:08:23 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:46.650 Malloc0 00:17:46.650 09:08:23 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:46.909 09:08:23 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.168 09:08:23 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.427 [2024-11-17 09:08:24.099331] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.427 09:08:24 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:47.427 09:08:24 -- host/timeout.sh@32 -- # bdevperf_pid=73700 00:17:47.427 09:08:24 -- host/timeout.sh@34 -- # waitforlisten 73700 /var/tmp/bdevperf.sock 00:17:47.427 09:08:24 -- common/autotest_common.sh@829 -- # '[' -z 73700 ']' 00:17:47.427 09:08:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.427 09:08:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.427 09:08:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.427 09:08:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.427 09:08:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.427 [2024-11-17 09:08:24.153864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
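At this point the target side is fully configured and listening on 10.0.0.2:4420, and host/timeout.sh launches bdevperf (queue depth 128, 4 KiB I/O, verify workload, 10 seconds). Pulled together, the target-side rpc.py calls logged above are (the rpc shorthand variable is added here only for readability):

# Target-side configuration from host/timeout.sh, consolidated from the rpc.py
# calls logged above (paths, NQN and options exactly as recorded).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, options as logged
$rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # expose Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420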
00:17:47.427 [2024-11-17 09:08:24.153954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73700 ] 00:17:47.427 [2024-11-17 09:08:24.292503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.686 [2024-11-17 09:08:24.361809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.253 09:08:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.253 09:08:25 -- common/autotest_common.sh@862 -- # return 0 00:17:48.253 09:08:25 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:48.512 09:08:25 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:49.079 NVMe0n1 00:17:49.080 09:08:25 -- host/timeout.sh@51 -- # rpc_pid=73718 00:17:49.080 09:08:25 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:49.080 09:08:25 -- host/timeout.sh@53 -- # sleep 1 00:17:49.080 Running I/O for 10 seconds... 00:17:50.017 09:08:26 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.278 [2024-11-17 09:08:27.007959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008134] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008172] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 
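The bdevperf instance above attaches controller NVMe0 to nqn.2016-06.io.spdk:cnode1 with a 5-second controller-loss timeout and a 2-second reconnect delay, starts the 10-second verify run, and one second in the test removes the 10.0.0.2:4420 listener from the target. That listener removal is what produces the tcp.c qpair-state errors here and the long run of ABORTED - SQ DELETION completions that follows: the in-flight reads and writes are aborted and the initiator begins the reconnect/timeout behaviour under test. Consolidated from the calls logged above (the rpc shorthand variable is added here only for readability):

# Initiator-side sequence for the first timeout scenario, consolidated from the
# rpc.py / bdevperf.py calls logged above (sockets, names and timeouts as recorded).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &                       # background I/O run
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                                      # drop the path mid-I/O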
09:08:27.008194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.278 [2024-11-17 09:08:27.008233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.279 [2024-11-17 09:08:27.008240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb20480 is same with the state(5) to be set 00:17:50.279 [2024-11-17 09:08:27.008296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:50.279 [2024-11-17 09:08:27.008726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.279 [2024-11-17 09:08:27.008812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.279 [2024-11-17 09:08:27.008833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.279 [2024-11-17 09:08:27.008853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.279 [2024-11-17 09:08:27.008894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.279 [2024-11-17 09:08:27.008914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 
09:08:27.008946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.279 [2024-11-17 09:08:27.008955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.008987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.008996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.009016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.009037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.009057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.009077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.009098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.009119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.279 [2024-11-17 09:08:27.009139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.279 [2024-11-17 09:08:27.009159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.279 [2024-11-17 09:08:27.009170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009354] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124072 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.280 [2024-11-17 09:08:27.009969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.009980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.280 [2024-11-17 09:08:27.009989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.280 [2024-11-17 09:08:27.010001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:50.281 [2024-11-17 09:08:27.010010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 
09:08:27.010215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010637] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-11-17 09:08:27.010761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.281 [2024-11-17 09:08:27.010802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.281 [2024-11-17 09:08:27.010813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.282 [2024-11-17 09:08:27.010822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.010842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.010862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.010882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.010902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.010925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.010946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.010966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.010986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.010997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.011006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.011017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.282 [2024-11-17 09:08:27.011027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.011037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10830c0 is same with the state(5) to be set 00:17:50.282 [2024-11-17 09:08:27.011051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:50.282 [2024-11-17 09:08:27.011059] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:50.282 [2024-11-17 09:08:27.011067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123816 len:8 PRP1 0x0 PRP2 0x0 00:17:50.282 [2024-11-17 09:08:27.011076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.011119] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10830c0 was disconnected and freed. reset controller. 00:17:50.282 [2024-11-17 09:08:27.011200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.282 [2024-11-17 09:08:27.011219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.011230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.282 [2024-11-17 09:08:27.011240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.011250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.282 [2024-11-17 09:08:27.011259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.011269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.282 [2024-11-17 09:08:27.011278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.282 [2024-11-17 09:08:27.011287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020010 is same with the state(5) to be set 00:17:50.282 [2024-11-17 09:08:27.011508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:50.282 [2024-11-17 09:08:27.011541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1020010 (9): Bad file descriptor 00:17:50.282 [2024-11-17 09:08:27.011656] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:50.282 [2024-11-17 09:08:27.011735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:50.282 [2024-11-17 09:08:27.011781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:50.282 [2024-11-17 09:08:27.011801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1020010 with addr=10.0.0.2, port=4420 00:17:50.282 [2024-11-17 09:08:27.011812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020010 is same with the state(5) to be set 00:17:50.282 [2024-11-17 09:08:27.011834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1020010 (9): Bad file descriptor 00:17:50.282 [2024-11-17 09:08:27.011865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:50.282 [2024-11-17 09:08:27.011877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] 
controller reinitialization failed 00:17:50.282 [2024-11-17 09:08:27.011887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:50.282 [2024-11-17 09:08:27.011907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:50.282 [2024-11-17 09:08:27.011919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:50.282 09:08:27 -- host/timeout.sh@56 -- # sleep 2 00:17:52.183 [2024-11-17 09:08:29.012030] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:52.183 [2024-11-17 09:08:29.012156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:52.183 [2024-11-17 09:08:29.012200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:52.183 [2024-11-17 09:08:29.012216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1020010 with addr=10.0.0.2, port=4420 00:17:52.183 [2024-11-17 09:08:29.012229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020010 is same with the state(5) to be set 00:17:52.183 [2024-11-17 09:08:29.012256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1020010 (9): Bad file descriptor 00:17:52.183 [2024-11-17 09:08:29.012276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:52.183 [2024-11-17 09:08:29.012285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:52.183 [2024-11-17 09:08:29.012296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:52.183 [2024-11-17 09:08:29.012321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
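The repeated "connect() failed, errno = 111" entries in the retry loop above and below are ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 any more, so each reconnect attempt is refused and bdev_nvme keeps cycling through "resetting controller" / "Resetting controller failed". A minimal check of the errno mapping, as a sketch assuming a Linux Python environment (not part of the captured output):

    import errno

    # errno 111 reported by the uring/posix connect() calls in the log above
    assert errno.ECONNREFUSED == 111   # value on Linux; other platforms differ
    print(errno.errorcode[111])        # -> 'ECONNREFUSED'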
00:17:52.183 [2024-11-17 09:08:29.012333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:52.183 09:08:29 -- host/timeout.sh@57 -- # get_controller 00:17:52.183 09:08:29 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:52.183 09:08:29 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:52.443 09:08:29 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:52.443 09:08:29 -- host/timeout.sh@58 -- # get_bdev 00:17:52.443 09:08:29 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:52.443 09:08:29 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:52.701 09:08:29 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:52.701 09:08:29 -- host/timeout.sh@61 -- # sleep 5 00:17:54.604 [2024-11-17 09:08:31.012455] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:54.604 [2024-11-17 09:08:31.012580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:54.604 [2024-11-17 09:08:31.012639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:54.604 [2024-11-17 09:08:31.012658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1020010 with addr=10.0.0.2, port=4420 00:17:54.604 [2024-11-17 09:08:31.012671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020010 is same with the state(5) to be set 00:17:54.604 [2024-11-17 09:08:31.012697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1020010 (9): Bad file descriptor 00:17:54.604 [2024-11-17 09:08:31.012716] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:54.604 [2024-11-17 09:08:31.012726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:54.604 [2024-11-17 09:08:31.012737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:54.604 [2024-11-17 09:08:31.012763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:54.604 [2024-11-17 09:08:31.012775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:56.508 [2024-11-17 09:08:33.012834] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:56.508 [2024-11-17 09:08:33.012914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:56.508 [2024-11-17 09:08:33.012942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:56.508 [2024-11-17 09:08:33.012954] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:56.508 [2024-11-17 09:08:33.012979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:57.445 
00:17:57.445 Latency(us)
00:17:57.445 [2024-11-17T09:08:34.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:57.445 [2024-11-17T09:08:34.375Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:57.445 Verification LBA range: start 0x0 length 0x4000
00:17:57.445 NVMe0n1 : 8.13 1896.80 7.41 15.75 0.00 66837.39 3053.38 7015926.69
00:17:57.445 [2024-11-17T09:08:34.375Z] ===================================================================================================================
00:17:57.445 [2024-11-17T09:08:34.375Z] Total : 1896.80 7.41 15.75 0.00 66837.39 3053.38 7015926.69
00:17:57.445 0
00:17:57.713 09:08:34 -- host/timeout.sh@62 -- # get_controller
00:17:57.713 09:08:34 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:17:57.713 09:08:34 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:17:57.984 09:08:34 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:17:57.984 09:08:34 -- host/timeout.sh@63 -- # get_bdev
00:17:57.984 09:08:34 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:17:57.984 09:08:34 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:17:58.242 09:08:35 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:17:58.242 09:08:35 -- host/timeout.sh@65 -- # wait 73718
00:17:58.242 09:08:35 -- host/timeout.sh@67 -- # killprocess 73700
00:17:58.242 09:08:35 -- common/autotest_common.sh@936 -- # '[' -z 73700 ']'
00:17:58.242 09:08:35 -- common/autotest_common.sh@940 -- # kill -0 73700
00:17:58.242 09:08:35 -- common/autotest_common.sh@941 -- # uname
00:17:58.242 09:08:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:58.242 09:08:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73700
00:17:58.242 killing process with pid 73700 Received shutdown signal, test time was about 9.154396 seconds
00:17:58.242 
00:17:58.242 Latency(us)
00:17:58.242 [2024-11-17T09:08:35.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:58.242 [2024-11-17T09:08:35.172Z] ===================================================================================================================
00:17:58.242 [2024-11-17T09:08:35.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:58.242 09:08:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:58.242 09:08:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:58.242 09:08:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73700'
00:17:58.242 09:08:35 -- common/autotest_common.sh@955 -- # kill 73700
00:17:58.242 09:08:35 -- common/autotest_common.sh@960 -- # wait 73700
00:17:58.500 09:08:35 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-17 09:08:35.472359] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:58.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
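The non-zero latency table above is internally consistent with the job parameters it reports (depth: 128, IO size: 4096): throughput in MiB/s equals IOPS times the 4 KiB I/O size, and the Fail/s column over the ~8.13 s runtime works out to roughly one full queue depth of aborted commands. A quick arithmetic check using only the figures from the table (a sketch, not part of the captured output):

    # Figures from the "NVMe0n1 : 8.13 1896.80 7.41 15.75 ..." row above
    runtime_s, iops, fail_per_s = 8.13, 1896.80, 15.75
    io_size = 4096                              # "IO size: 4096" in the Job line

    print(round(iops * io_size / 2**20, 2))     # ~7.41 MiB/s, matches the MiB/s column
    print(round(fail_per_s * runtime_s))        # ~128 failed I/Os, i.e. about one queue depth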
00:17:58.759 09:08:35 -- host/timeout.sh@74 -- # bdevperf_pid=73841
00:17:58.759 09:08:35 -- host/timeout.sh@76 -- # waitforlisten 73841 /var/tmp/bdevperf.sock
00:17:58.759 09:08:35 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:17:58.759 09:08:35 -- common/autotest_common.sh@829 -- # '[' -z 73841 ']'
00:17:58.759 09:08:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:58.759 09:08:35 -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:58.759 09:08:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:58.759 09:08:35 -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:58.759 09:08:35 -- common/autotest_common.sh@10 -- # set +x
00:17:59.017 [2024-11-17 09:08:35.554008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:59.017 [2024-11-17 09:08:35.554416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73841 ]
00:17:59.017 [2024-11-17 09:08:35.692900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:59.017 [2024-11-17 09:08:35.748179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:59.950 09:08:36 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:59.950 09:08:36 -- common/autotest_common.sh@862 -- # return 0
00:17:59.950 09:08:36 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:17:59.950 09:08:36 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:18:00.516 NVMe0n1
00:18:00.516 09:08:37 -- host/timeout.sh@84 -- # rpc_pid=73864
00:18:00.516 09:08:37 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:00.516 09:08:37 -- host/timeout.sh@86 -- # sleep 1
00:18:00.516 Running I/O for 10 seconds...
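For context on the bdev_nvme_attach_controller flags traced above: as I read SPDK's documented semantics, --reconnect-delay-sec 1 spaces reconnect attempts roughly one second apart, --fast-io-fail-timeout-sec 2 is how long queued I/O is held back after a disconnect before being failed up to bdevperf, and --ctrlr-loss-timeout-sec 5 bounds how long reconnects are attempted before the controller is deleted. A rough timeline sketch of those three values (an illustration under that reading, not SPDK code):

    # Values from the attach_controller call above (all in seconds)
    reconnect_delay, fast_io_fail, ctrlr_loss = 1, 2, 5

    for t in range(0, ctrlr_loss, reconnect_delay):
        io_policy = "queue I/O" if t < fast_io_fail else "fail I/O back to the caller"
        print(f"t={t}s: attempt reconnect; while disconnected, {io_policy}")
    print(f"t>={ctrlr_loss}s: stop retrying and delete the controller")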
00:18:01.453 09:08:38 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.714 [2024-11-17 09:08:38.431065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431181] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431203] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431210] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431318] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431342] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9807b0 is same with the state(5) to be set 00:18:01.714 [2024-11-17 09:08:38.431506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.714 [2024-11-17 09:08:38.431534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.714 [2024-11-17 09:08:38.431555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.714 [2024-11-17 09:08:38.431564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.714 [2024-11-17 09:08:38.431575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.714 [2024-11-17 09:08:38.431585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:01.715 [2024-11-17 09:08:38.431829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.431984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.431994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 
09:08:38.432036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.715 [2024-11-17 09:08:38.432092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.715 [2024-11-17 09:08:38.432188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.715 [2024-11-17 09:08:38.432225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.715 [2024-11-17 09:08:38.432245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.715 [2024-11-17 09:08:38.432263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.715 [2024-11-17 09:08:38.432273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.715 [2024-11-17 09:08:38.432281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432428] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.716 [2024-11-17 09:08:38.432867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.716 [2024-11-17 09:08:38.432984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.716 [2024-11-17 09:08:38.432995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 
09:08:38.433033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.717 [2024-11-17 09:08:38.433160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.717 [2024-11-17 09:08:38.433256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.717 [2024-11-17 09:08:38.433315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.717 [2024-11-17 09:08:38.433373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.717 [2024-11-17 09:08:38.433394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.717 [2024-11-17 09:08:38.433432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.717 [2024-11-17 09:08:38.433451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.717 [2024-11-17 09:08:38.433652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.717 [2024-11-17 09:08:38.433663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.433671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.433737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.433760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.433783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.433806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.433828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.433848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.433870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.433891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.433912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.433933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.433953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.433974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.433989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.433999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.434020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.434071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.434106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.718 [2024-11-17 09:08:38.434125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.434144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 
[2024-11-17 09:08:38.434165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.434184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.718 [2024-11-17 09:08:38.434203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c0c0 is same with the state(5) to be set 00:18:01.718 [2024-11-17 09:08:38.434224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:01.718 [2024-11-17 09:08:38.434231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:01.718 [2024-11-17 09:08:38.434240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128344 len:8 PRP1 0x0 PRP2 0x0 00:18:01.718 [2024-11-17 09:08:38.434248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434289] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x198c0c0 was disconnected and freed. reset controller. 00:18:01.718 [2024-11-17 09:08:38.434364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.718 [2024-11-17 09:08:38.434380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.718 [2024-11-17 09:08:38.434399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.718 [2024-11-17 09:08:38.434416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.718 [2024-11-17 09:08:38.434434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.718 [2024-11-17 09:08:38.434445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929010 is same with the state(5) to be set 00:18:01.718 [2024-11-17 09:08:38.434674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:01.718 [2024-11-17 09:08:38.434712] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929010 (9): Bad file descriptor 00:18:01.718 [2024-11-17 09:08:38.434805] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:01.718 [2024-11-17 09:08:38.434867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:01.718 [2024-11-17 09:08:38.434911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:01.718 [2024-11-17 09:08:38.434959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929010 with addr=10.0.0.2, port=4420 00:18:01.718 [2024-11-17 09:08:38.434970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929010 is same with the state(5) to be set 00:18:01.718 [2024-11-17 09:08:38.434989] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929010 (9): Bad file descriptor 00:18:01.718 [2024-11-17 09:08:38.435005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:01.718 [2024-11-17 09:08:38.435015] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:01.718 09:08:38 -- host/timeout.sh@90 -- # sleep 1 00:18:01.718 [2024-11-17 09:08:38.450179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:01.718 [2024-11-17 09:08:38.450244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:01.718 [2024-11-17 09:08:38.450266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:02.653 [2024-11-17 09:08:39.450400] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:02.653 [2024-11-17 09:08:39.450502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:02.653 [2024-11-17 09:08:39.450545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:02.653 [2024-11-17 09:08:39.450561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929010 with addr=10.0.0.2, port=4420 00:18:02.653 [2024-11-17 09:08:39.450573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929010 is same with the state(5) to be set 00:18:02.653 [2024-11-17 09:08:39.450599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929010 (9): Bad file descriptor 00:18:02.653 [2024-11-17 09:08:39.450675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:02.653 [2024-11-17 09:08:39.450704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:02.653 [2024-11-17 09:08:39.450715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:02.653 [2024-11-17 09:08:39.450742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:02.653 [2024-11-17 09:08:39.450754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:02.653 09:08:39 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.912 [2024-11-17 09:08:39.697570] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.912 09:08:39 -- host/timeout.sh@92 -- # wait 73864 00:18:03.847 [2024-11-17 09:08:40.461730] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:10.415 00:18:10.415 Latency(us) 00:18:10.415 [2024-11-17T09:08:47.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.415 [2024-11-17T09:08:47.345Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.415 Verification LBA range: start 0x0 length 0x4000 00:18:10.415 NVMe0n1 : 10.01 9925.50 38.77 0.00 0.00 12876.05 882.50 3019898.88 00:18:10.415 [2024-11-17T09:08:47.345Z] =================================================================================================================== 00:18:10.415 [2024-11-17T09:08:47.345Z] Total : 9925.50 38.77 0.00 0.00 12876.05 882.50 3019898.88 00:18:10.415 0 00:18:10.415 09:08:47 -- host/timeout.sh@97 -- # rpc_pid=73973 00:18:10.415 09:08:47 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:10.415 09:08:47 -- host/timeout.sh@98 -- # sleep 1 00:18:10.674 Running I/O for 10 seconds... 00:18:11.611 09:08:48 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.873 [2024-11-17 09:08:48.567368] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with t[2024-11-17 09:08:48.567380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nshe state(5) to be set 00:18:11.873 id:0 cdw10:00000000 cdw11:00000000 00:18:11.873 [2024-11-17 09:08:48.567434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.873 [2024-11-17 09:08:48.567470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.873 [2024-11-17 09:08:48.567485] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-17 09:08:48.567493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 he state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.873 [2024-11-17 09:08:48.567511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929010 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same 
with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f4a0 is same with the state(5) to be set 00:18:11.873 [2024-11-17 09:08:48.567807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.567824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.567853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.567875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.567896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.567917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.567939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.567968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.567994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124296 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.873 [2024-11-17 09:08:48.568247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.873 [2024-11-17 09:08:48.568258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.874 [2024-11-17 09:08:48.568435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.874 [2024-11-17 09:08:48.568541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.874 [2024-11-17 09:08:48.568603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.874 
[2024-11-17 09:08:48.568623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.874 [2024-11-17 09:08:48.568645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.874 [2024-11-17 09:08:48.568678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568849] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.874 [2024-11-17 09:08:48.568912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.568986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.568995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.569007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.569016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.569028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.569037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.569058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.569068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.569079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.874 [2024-11-17 09:08:48.569088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.874 [2024-11-17 09:08:48.569100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569276] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.875 [2024-11-17 09:08:48.569924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.875 [2024-11-17 09:08:48.569934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:11.875 [2024-11-17 09:08:48.569945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.875 [2024-11-17 09:08:48.569954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.569966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.569975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.569986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.876 [2024-11-17 09:08:48.569996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.876 [2024-11-17 09:08:48.570017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.876 [2024-11-17 09:08:48.570038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.876 [2024-11-17 09:08:48.570058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 
09:08:48.570164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.876 [2024-11-17 09:08:48.570277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.876 [2024-11-17 09:08:48.570360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.876 [2024-11-17 09:08:48.570382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.876 [2024-11-17 09:08:48.570575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x199fcc0 is same with the state(5) to be set 00:18:11.876 [2024-11-17 09:08:48.570607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.876 [2024-11-17 09:08:48.570617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.876 [2024-11-17 09:08:48.570626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124968 len:8 PRP1 0x0 PRP2 0x0 00:18:11.876 [2024-11-17 09:08:48.570635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.876 [2024-11-17 09:08:48.570679] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x199fcc0 was disconnected and freed. reset controller. 00:18:11.876 [2024-11-17 09:08:48.570926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:11.876 [2024-11-17 09:08:48.570961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929010 (9): Bad file descriptor 00:18:11.876 [2024-11-17 09:08:48.571058] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.876 [2024-11-17 09:08:48.571113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.876 [2024-11-17 09:08:48.571165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.876 [2024-11-17 09:08:48.571182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929010 with addr=10.0.0.2, port=4420 00:18:11.876 [2024-11-17 09:08:48.571193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929010 is same with the state(5) to be set 00:18:11.876 [2024-11-17 09:08:48.571211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929010 (9): Bad file descriptor 00:18:11.876 [2024-11-17 09:08:48.571228] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:11.876 [2024-11-17 09:08:48.571238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:11.876 [2024-11-17 09:08:48.571248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:11.876 [2024-11-17 09:08:48.571268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
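Each failed reconnect above reports connect() errno = 111 because the target's TCP listener has been removed at this point in the test; on Linux that value maps to ECONNREFUSED. A quick way to confirm the mapping, outside the test scripts (assumes a stock python3):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # expected output on Linux: ECONNREFUSED - Connection refused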
00:18:11.876 [2024-11-17 09:08:48.571279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:11.876 09:08:48 -- host/timeout.sh@101 -- # sleep 3 00:18:12.814 [2024-11-17 09:08:49.571381] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.814 [2024-11-17 09:08:49.571498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.814 [2024-11-17 09:08:49.571543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.814 [2024-11-17 09:08:49.571560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929010 with addr=10.0.0.2, port=4420 00:18:12.814 [2024-11-17 09:08:49.571573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929010 is same with the state(5) to be set 00:18:12.814 [2024-11-17 09:08:49.571596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929010 (9): Bad file descriptor 00:18:12.814 [2024-11-17 09:08:49.571627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:12.814 [2024-11-17 09:08:49.571639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:12.814 [2024-11-17 09:08:49.571649] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:12.814 [2024-11-17 09:08:49.571673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.814 [2024-11-17 09:08:49.571684] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.752 [2024-11-17 09:08:50.571791] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.752 [2024-11-17 09:08:50.571906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.752 [2024-11-17 09:08:50.571950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.752 [2024-11-17 09:08:50.571980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929010 with addr=10.0.0.2, port=4420 00:18:13.752 [2024-11-17 09:08:50.571992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929010 is same with the state(5) to be set 00:18:13.752 [2024-11-17 09:08:50.572017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929010 (9): Bad file descriptor 00:18:13.752 [2024-11-17 09:08:50.572034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:13.752 [2024-11-17 09:08:50.572044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:13.752 [2024-11-17 09:08:50.572054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:13.752 [2024-11-17 09:08:50.572078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:13.752 [2024-11-17 09:08:50.572088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.689 [2024-11-17 09:08:51.574292] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.689 [2024-11-17 09:08:51.574382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.689 [2024-11-17 09:08:51.574438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.689 [2024-11-17 09:08:51.574455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929010 with addr=10.0.0.2, port=4420 00:18:14.689 [2024-11-17 09:08:51.574468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929010 is same with the state(5) to be set 00:18:14.689 [2024-11-17 09:08:51.574634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929010 (9): Bad file descriptor 00:18:14.689 [2024-11-17 09:08:51.574823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:14.689 [2024-11-17 09:08:51.574845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:14.689 [2024-11-17 09:08:51.574857] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:14.689 [2024-11-17 09:08:51.577256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:14.689 [2024-11-17 09:08:51.577315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.689 09:08:51 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.256 [2024-11-17 09:08:51.911466] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.256 09:08:51 -- host/timeout.sh@103 -- # wait 73973 00:18:15.823 [2024-11-17 09:08:52.607861] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
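The stretch above is the recovery half of the listener-bounce check in host/timeout.sh: the repeated "Resetting controller failed" messages stop as soon as the listener is re-added ("NVMe/TCP Target Listening" followed by "Resetting controller successful"). A minimal sketch of that flow, assuming a running target that already exposes the subsystem, using the same paths, NQN, and address as this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the TCP listener: outstanding I/O is aborted (ABORTED - SQ DELETION) and every
  # reconnect attempt fails with connect() errno 111 until the listener comes back.
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # host/timeout.sh@101: give bdev_nvme time to retry the controller reset
  # Re-add the listener: the next reset attempt connects and completes successfully.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420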
00:18:21.093
00:18:21.093 Latency(us)
00:18:21.093 [2024-11-17T09:08:58.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:21.093 [2024-11-17T09:08:58.023Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:21.093 Verification LBA range: start 0x0 length 0x4000
00:18:21.093 NVMe0n1 : 10.01 8318.59 32.49 6099.10 0.00 8862.30 565.99 3019898.88
00:18:21.093 [2024-11-17T09:08:58.023Z] ===================================================================================================================
00:18:21.093 [2024-11-17T09:08:58.023Z] Total : 8318.59 32.49 6099.10 0.00 8862.30 0.00 3019898.88
00:18:21.093 0
00:18:21.093 09:08:57 -- host/timeout.sh@105 -- # killprocess 73841
00:18:21.093 09:08:57 -- common/autotest_common.sh@936 -- # '[' -z 73841 ']'
00:18:21.093 09:08:57 -- common/autotest_common.sh@940 -- # kill -0 73841
00:18:21.093 09:08:57 -- common/autotest_common.sh@941 -- # uname
00:18:21.093 09:08:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:21.093 09:08:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73841
00:18:21.093 killing process with pid 73841
Received shutdown signal, test time was about 10.000000 seconds
00:18:21.093
00:18:21.093 Latency(us)
00:18:21.093 [2024-11-17T09:08:58.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:21.093 [2024-11-17T09:08:58.023Z] ===================================================================================================================
00:18:21.093 [2024-11-17T09:08:58.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:21.093 09:08:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:21.093 09:08:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:21.093 09:08:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73841'
00:18:21.093 09:08:57 -- common/autotest_common.sh@955 -- # kill 73841
00:18:21.093 09:08:57 -- common/autotest_common.sh@960 -- # wait 73841
00:18:21.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:21.093 09:08:57 -- host/timeout.sh@110 -- # bdevperf_pid=74083
00:18:21.093 09:08:57 -- host/timeout.sh@112 -- # waitforlisten 74083 /var/tmp/bdevperf.sock
00:18:21.093 09:08:57 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:18:21.093 09:08:57 -- common/autotest_common.sh@829 -- # '[' -z 74083 ']'
00:18:21.093 09:08:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:21.093 09:08:57 -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:21.093 09:08:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:21.093 09:08:57 -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:21.093 09:08:57 -- common/autotest_common.sh@10 -- # set +x
00:18:21.093 [2024-11-17 09:08:57.714912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
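In the bdevperf summary above, the MiB/s column follows from the IOPS column and the 4096-byte I/O size, and the Fail/s column reflects I/O that completed with an error status (here, largely the commands aborted while the listener was down). A quick arithmetic check, not part of the test output:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8318.59 * 4096 / 1048576 }'
  # prints 32.49 MiB/s, matching the NVMe0n1 row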
00:18:21.093 [2024-11-17 09:08:57.715022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74083 ] 00:18:21.093 [2024-11-17 09:08:57.850957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.093 [2024-11-17 09:08:57.905959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.029 09:08:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.029 09:08:58 -- common/autotest_common.sh@862 -- # return 0 00:18:22.029 09:08:58 -- host/timeout.sh@116 -- # dtrace_pid=74104 00:18:22.029 09:08:58 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 74083 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:22.030 09:08:58 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:22.030 09:08:58 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:22.596 NVMe0n1 00:18:22.596 09:08:59 -- host/timeout.sh@124 -- # rpc_pid=74146 00:18:22.596 09:08:59 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:22.596 09:08:59 -- host/timeout.sh@125 -- # sleep 1 00:18:22.596 Running I/O for 10 seconds... 00:18:23.533 09:09:00 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:23.795 [2024-11-17 09:09:00.460703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.795 [2024-11-17 09:09:00.461253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.461384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.795 [2024-11-17 09:09:00.461511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.461593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.795 [2024-11-17 09:09:00.461705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.461797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.795 [2024-11-17 09:09:00.461876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.461947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397010 is same with the state(5) to be set 00:18:23.795 [2024-11-17 09:09:00.462274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.795 [2024-11-17 09:09:00.462396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 
09:09:00.462875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.795 [2024-11-17 09:09:00.462946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.795 [2024-11-17 09:09:00.462955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.462966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.462975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.462986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.462995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463280] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:23.796 [2024-11-17 09:09:00.463714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.796 [2024-11-17 09:09:00.463743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.796 [2024-11-17 09:09:00.463755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.463987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.463999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:45 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.797 [2024-11-17 09:09:00.464480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.797 [2024-11-17 09:09:00.464491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84400 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.798 [2024-11-17 09:09:00.464764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464969] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.464980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.464990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465174] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.798 [2024-11-17 09:09:00.465278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.798 [2024-11-17 09:09:00.465289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fa0c0 is same with the state(5) to be set 00:18:23.798 [2024-11-17 09:09:00.465302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.798 [2024-11-17 09:09:00.465309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.798 [2024-11-17 09:09:00.465318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43720 len:8 PRP1 0x0 PRP2 0x0 00:18:23.799 [2024-11-17 09:09:00.465329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.799 [2024-11-17 09:09:00.465371] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13fa0c0 was disconnected and freed. reset controller. 
00:18:23.799 [2024-11-17 09:09:00.465671] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:23.799 [2024-11-17 09:09:00.465715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1397010 (9): Bad file descriptor 00:18:23.799 [2024-11-17 09:09:00.465819] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:23.799 [2024-11-17 09:09:00.465886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:23.799 [2024-11-17 09:09:00.465932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:23.799 [2024-11-17 09:09:00.465949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1397010 with addr=10.0.0.2, port=4420 00:18:23.799 [2024-11-17 09:09:00.465960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397010 is same with the state(5) to be set 00:18:23.799 [2024-11-17 09:09:00.465979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1397010 (9): Bad file descriptor 00:18:23.799 [2024-11-17 09:09:00.465996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:23.799 [2024-11-17 09:09:00.466006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:23.799 [2024-11-17 09:09:00.466015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:23.799 [2024-11-17 09:09:00.466035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:23.799 [2024-11-17 09:09:00.466045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:23.799 09:09:00 -- host/timeout.sh@128 -- # wait 74146 00:18:25.723 [2024-11-17 09:09:02.466241] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.723 [2024-11-17 09:09:02.466334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.723 [2024-11-17 09:09:02.466382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.723 [2024-11-17 09:09:02.466401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1397010 with addr=10.0.0.2, port=4420 00:18:25.723 [2024-11-17 09:09:02.466414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397010 is same with the state(5) to be set 00:18:25.723 [2024-11-17 09:09:02.466441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1397010 (9): Bad file descriptor 00:18:25.723 [2024-11-17 09:09:02.466473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:25.723 [2024-11-17 09:09:02.466485] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:25.723 [2024-11-17 09:09:02.466496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:25.723 [2024-11-17 09:09:02.466521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
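The reset attempts above all fail the same way: connect() to 10.0.0.2:4420 returns errno 111 (connection refused), the controller re-enters the failed state, and bdev_nvme schedules the next reconnect roughly two seconds later. The trace dump and grep a few lines further down are how timeout.sh verifies that those reconnect delays were actually recorded. A condensed sketch of that check, with the trace path, controller name, and threshold copied from the surrounding log (a sketch of the visible steps, not the full script):

    # trace captured by bdevperf while the target port was unreachable (path from this log)
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    cat "$trace"
    # count how many times bdev_nvme inserted a delay before reconnecting NVMe0
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    # with ~2 s delays over an ~8 s window this run records 3; two or fewer is a failure
    if (( delays <= 2 )); then
        echo "expected more than 2 reconnect delays, saw $delays" >&2
        exit 1
    fi
    rm -f "$trace"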
00:18:25.723 [2024-11-17 09:09:02.466532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:27.629 [2024-11-17 09:09:04.466730] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.629 [2024-11-17 09:09:04.466848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.629 [2024-11-17 09:09:04.466895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.629 [2024-11-17 09:09:04.466912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1397010 with addr=10.0.0.2, port=4420 00:18:27.629 [2024-11-17 09:09:04.466925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397010 is same with the state(5) to be set 00:18:27.629 [2024-11-17 09:09:04.466952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1397010 (9): Bad file descriptor 00:18:27.629 [2024-11-17 09:09:04.466971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:27.629 [2024-11-17 09:09:04.466980] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:27.629 [2024-11-17 09:09:04.466991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:27.629 [2024-11-17 09:09:04.467017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:27.629 [2024-11-17 09:09:04.467029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.164 [2024-11-17 09:09:06.467093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:30.164 [2024-11-17 09:09:06.467162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:30.164 [2024-11-17 09:09:06.467190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:30.164 [2024-11-17 09:09:06.467199] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:30.164 [2024-11-17 09:09:06.467224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:30.732 00:18:30.732 Latency(us) 00:18:30.732 [2024-11-17T09:09:07.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.732 [2024-11-17T09:09:07.662Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:30.732 NVMe0n1 : 8.04 2068.16 8.08 15.91 0.00 61356.51 7089.80 7015926.69 00:18:30.732 [2024-11-17T09:09:07.662Z] =================================================================================================================== 00:18:30.732 [2024-11-17T09:09:07.662Z] Total : 2068.16 8.08 15.91 0.00 61356.51 7089.80 7015926.69 00:18:30.732 0 00:18:30.732 09:09:07 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:30.732 Attaching 5 probes... 
00:18:30.732 1256.169569: reset bdev controller NVMe0 00:18:30.732 1256.260876: reconnect bdev controller NVMe0 00:18:30.732 3256.614522: reconnect delay bdev controller NVMe0 00:18:30.732 3256.651561: reconnect bdev controller NVMe0 00:18:30.732 5257.073044: reconnect delay bdev controller NVMe0 00:18:30.732 5257.110467: reconnect bdev controller NVMe0 00:18:30.732 7257.562540: reconnect delay bdev controller NVMe0 00:18:30.732 7257.581016: reconnect bdev controller NVMe0 00:18:30.732 09:09:07 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:30.732 09:09:07 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:30.732 09:09:07 -- host/timeout.sh@136 -- # kill 74104 00:18:30.732 09:09:07 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:30.732 09:09:07 -- host/timeout.sh@139 -- # killprocess 74083 00:18:30.732 09:09:07 -- common/autotest_common.sh@936 -- # '[' -z 74083 ']' 00:18:30.732 09:09:07 -- common/autotest_common.sh@940 -- # kill -0 74083 00:18:30.732 09:09:07 -- common/autotest_common.sh@941 -- # uname 00:18:30.732 09:09:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:30.732 09:09:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74083 00:18:30.732 killing process with pid 74083 00:18:30.732 Received shutdown signal, test time was about 8.111990 seconds 00:18:30.732 00:18:30.732 Latency(us) 00:18:30.732 [2024-11-17T09:09:07.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.732 [2024-11-17T09:09:07.662Z] =================================================================================================================== 00:18:30.732 [2024-11-17T09:09:07.662Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.732 09:09:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:30.732 09:09:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:30.732 09:09:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74083' 00:18:30.732 09:09:07 -- common/autotest_common.sh@955 -- # kill 74083 00:18:30.732 09:09:07 -- common/autotest_common.sh@960 -- # wait 74083 00:18:30.991 09:09:07 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.250 09:09:08 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:31.250 09:09:08 -- host/timeout.sh@145 -- # nvmftestfini 00:18:31.250 09:09:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:31.250 09:09:08 -- nvmf/common.sh@116 -- # sync 00:18:31.250 09:09:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:31.250 09:09:08 -- nvmf/common.sh@119 -- # set +e 00:18:31.250 09:09:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:31.250 09:09:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:31.250 rmmod nvme_tcp 00:18:31.250 rmmod nvme_fabrics 00:18:31.250 rmmod nvme_keyring 00:18:31.250 09:09:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:31.250 09:09:08 -- nvmf/common.sh@123 -- # set -e 00:18:31.250 09:09:08 -- nvmf/common.sh@124 -- # return 0 00:18:31.250 09:09:08 -- nvmf/common.sh@477 -- # '[' -n 73645 ']' 00:18:31.250 09:09:08 -- nvmf/common.sh@478 -- # killprocess 73645 00:18:31.250 09:09:08 -- common/autotest_common.sh@936 -- # '[' -z 73645 ']' 00:18:31.250 09:09:08 -- common/autotest_common.sh@940 -- # kill -0 73645 00:18:31.250 09:09:08 -- common/autotest_common.sh@941 -- # uname 00:18:31.250 09:09:08 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:18:31.250 09:09:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73645 00:18:31.250 killing process with pid 73645 00:18:31.250 09:09:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.250 09:09:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.250 09:09:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73645' 00:18:31.250 09:09:08 -- common/autotest_common.sh@955 -- # kill 73645 00:18:31.250 09:09:08 -- common/autotest_common.sh@960 -- # wait 73645 00:18:31.509 09:09:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:31.509 09:09:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:31.509 09:09:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:31.509 09:09:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.509 09:09:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:31.509 09:09:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.509 09:09:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.509 09:09:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.509 09:09:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:31.509 ************************************ 00:18:31.509 END TEST nvmf_timeout 00:18:31.509 ************************************ 00:18:31.509 00:18:31.509 real 0m47.037s 00:18:31.509 user 2m18.837s 00:18:31.509 sys 0m5.205s 00:18:31.509 09:09:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:31.509 09:09:08 -- common/autotest_common.sh@10 -- # set +x 00:18:31.767 09:09:08 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:31.768 09:09:08 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:31.768 09:09:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.768 09:09:08 -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 09:09:08 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:18:31.768 00:18:31.768 real 10m35.439s 00:18:31.768 user 29m32.939s 00:18:31.768 sys 3m22.176s 00:18:31.768 09:09:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:31.768 09:09:08 -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 ************************************ 00:18:31.768 END TEST nvmf_tcp 00:18:31.768 ************************************ 00:18:31.768 09:09:08 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:18:31.768 09:09:08 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:31.768 09:09:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:31.768 09:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.768 09:09:08 -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 ************************************ 00:18:31.768 START TEST nvmf_dif 00:18:31.768 ************************************ 00:18:31.768 09:09:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:31.768 * Looking for test storage... 
00:18:31.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:31.768 09:09:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:31.768 09:09:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:31.768 09:09:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:31.768 09:09:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:31.768 09:09:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:31.768 09:09:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:31.768 09:09:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:31.768 09:09:08 -- scripts/common.sh@335 -- # IFS=.-: 00:18:31.768 09:09:08 -- scripts/common.sh@335 -- # read -ra ver1 00:18:31.768 09:09:08 -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.768 09:09:08 -- scripts/common.sh@336 -- # read -ra ver2 00:18:31.768 09:09:08 -- scripts/common.sh@337 -- # local 'op=<' 00:18:31.768 09:09:08 -- scripts/common.sh@339 -- # ver1_l=2 00:18:31.768 09:09:08 -- scripts/common.sh@340 -- # ver2_l=1 00:18:31.768 09:09:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:31.768 09:09:08 -- scripts/common.sh@343 -- # case "$op" in 00:18:31.768 09:09:08 -- scripts/common.sh@344 -- # : 1 00:18:31.768 09:09:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:31.768 09:09:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.768 09:09:08 -- scripts/common.sh@364 -- # decimal 1 00:18:31.768 09:09:08 -- scripts/common.sh@352 -- # local d=1 00:18:32.027 09:09:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.027 09:09:08 -- scripts/common.sh@354 -- # echo 1 00:18:32.027 09:09:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:32.027 09:09:08 -- scripts/common.sh@365 -- # decimal 2 00:18:32.027 09:09:08 -- scripts/common.sh@352 -- # local d=2 00:18:32.027 09:09:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.027 09:09:08 -- scripts/common.sh@354 -- # echo 2 00:18:32.027 09:09:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:32.027 09:09:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:32.027 09:09:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:32.027 09:09:08 -- scripts/common.sh@367 -- # return 0 00:18:32.027 09:09:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.027 09:09:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:32.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.027 --rc genhtml_branch_coverage=1 00:18:32.027 --rc genhtml_function_coverage=1 00:18:32.027 --rc genhtml_legend=1 00:18:32.027 --rc geninfo_all_blocks=1 00:18:32.027 --rc geninfo_unexecuted_blocks=1 00:18:32.027 00:18:32.027 ' 00:18:32.027 09:09:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:32.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.027 --rc genhtml_branch_coverage=1 00:18:32.027 --rc genhtml_function_coverage=1 00:18:32.027 --rc genhtml_legend=1 00:18:32.027 --rc geninfo_all_blocks=1 00:18:32.027 --rc geninfo_unexecuted_blocks=1 00:18:32.027 00:18:32.027 ' 00:18:32.027 09:09:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:32.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.027 --rc genhtml_branch_coverage=1 00:18:32.027 --rc genhtml_function_coverage=1 00:18:32.027 --rc genhtml_legend=1 00:18:32.027 --rc geninfo_all_blocks=1 00:18:32.027 --rc geninfo_unexecuted_blocks=1 00:18:32.027 00:18:32.027 ' 00:18:32.027 
09:09:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:32.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.027 --rc genhtml_branch_coverage=1 00:18:32.027 --rc genhtml_function_coverage=1 00:18:32.027 --rc genhtml_legend=1 00:18:32.027 --rc geninfo_all_blocks=1 00:18:32.027 --rc geninfo_unexecuted_blocks=1 00:18:32.027 00:18:32.027 ' 00:18:32.027 09:09:08 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:32.027 09:09:08 -- nvmf/common.sh@7 -- # uname -s 00:18:32.027 09:09:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.027 09:09:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.027 09:09:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.027 09:09:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.027 09:09:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.027 09:09:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.027 09:09:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.027 09:09:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.027 09:09:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.027 09:09:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.027 09:09:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:18:32.027 09:09:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:18:32.027 09:09:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.027 09:09:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.027 09:09:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:32.027 09:09:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:32.027 09:09:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.027 09:09:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.027 09:09:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.027 09:09:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.027 09:09:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.027 09:09:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.027 09:09:08 -- paths/export.sh@5 -- # export PATH 00:18:32.027 09:09:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.027 09:09:08 -- nvmf/common.sh@46 -- # : 0 00:18:32.027 09:09:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:32.027 09:09:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:32.027 09:09:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:32.027 09:09:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.027 09:09:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.027 09:09:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:32.027 09:09:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:32.027 09:09:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:32.027 09:09:08 -- target/dif.sh@15 -- # NULL_META=16 00:18:32.027 09:09:08 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:32.027 09:09:08 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:32.027 09:09:08 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:32.027 09:09:08 -- target/dif.sh@135 -- # nvmftestinit 00:18:32.027 09:09:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:32.027 09:09:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.027 09:09:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:32.027 09:09:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:32.027 09:09:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:32.027 09:09:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.027 09:09:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:32.027 09:09:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.027 09:09:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:32.028 09:09:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:32.028 09:09:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:32.028 09:09:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:32.028 09:09:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:32.028 09:09:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:32.028 09:09:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.028 09:09:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.028 09:09:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:32.028 09:09:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:32.028 09:09:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:32.028 09:09:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:32.028 09:09:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:32.028 09:09:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.028 09:09:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:32.028 09:09:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:32.028 09:09:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:32.028 09:09:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:32.028 09:09:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:32.028 09:09:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:32.028 Cannot find device "nvmf_tgt_br" 
00:18:32.028 09:09:08 -- nvmf/common.sh@154 -- # true 00:18:32.028 09:09:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:32.028 Cannot find device "nvmf_tgt_br2" 00:18:32.028 09:09:08 -- nvmf/common.sh@155 -- # true 00:18:32.028 09:09:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:32.028 09:09:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:32.028 Cannot find device "nvmf_tgt_br" 00:18:32.028 09:09:08 -- nvmf/common.sh@157 -- # true 00:18:32.028 09:09:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:32.028 Cannot find device "nvmf_tgt_br2" 00:18:32.028 09:09:08 -- nvmf/common.sh@158 -- # true 00:18:32.028 09:09:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:32.028 09:09:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:32.028 09:09:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:32.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.028 09:09:08 -- nvmf/common.sh@161 -- # true 00:18:32.028 09:09:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:32.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.028 09:09:08 -- nvmf/common.sh@162 -- # true 00:18:32.028 09:09:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:32.028 09:09:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:32.028 09:09:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:32.028 09:09:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:32.028 09:09:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:32.028 09:09:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:32.028 09:09:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:32.028 09:09:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:32.028 09:09:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:32.028 09:09:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:32.028 09:09:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:32.028 09:09:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:32.028 09:09:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:32.028 09:09:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:32.286 09:09:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:32.286 09:09:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:32.286 09:09:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:32.286 09:09:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:32.287 09:09:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:32.287 09:09:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:32.287 09:09:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:32.287 09:09:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:32.287 09:09:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:32.287 09:09:09 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:32.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:18:32.287 00:18:32.287 --- 10.0.0.2 ping statistics --- 00:18:32.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.287 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:32.287 09:09:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:32.287 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:32.287 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:32.287 00:18:32.287 --- 10.0.0.3 ping statistics --- 00:18:32.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.287 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:32.287 09:09:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:32.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:32.287 00:18:32.287 --- 10.0.0.1 ping statistics --- 00:18:32.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.287 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:32.287 09:09:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.287 09:09:09 -- nvmf/common.sh@421 -- # return 0 00:18:32.287 09:09:09 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:32.287 09:09:09 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:32.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:32.544 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:32.544 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:32.544 09:09:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.544 09:09:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.544 09:09:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.544 09:09:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.544 09:09:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.544 09:09:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:32.544 09:09:09 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:32.544 09:09:09 -- target/dif.sh@137 -- # nvmfappstart 00:18:32.544 09:09:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:32.544 09:09:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.544 09:09:09 -- common/autotest_common.sh@10 -- # set +x 00:18:32.544 09:09:09 -- nvmf/common.sh@469 -- # nvmfpid=74591 00:18:32.544 09:09:09 -- nvmf/common.sh@470 -- # waitforlisten 74591 00:18:32.544 09:09:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:32.544 09:09:09 -- common/autotest_common.sh@829 -- # '[' -z 74591 ']' 00:18:32.801 09:09:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.801 09:09:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.801 09:09:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
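The nvmf_veth_init sequence traced above gives the target its own network namespace, three veth pairs whose host-side ends are enslaved to a bridge, and the 10.0.0.1/2/3 addresses that the ping checks just above exercise before the target is started. A minimal sketch of the same topology, using the interface names and addresses from this log (cleanup and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # 10.0.0.1 = initiator, 10.0.0.2/.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side ends together and let NVMe/TCP (port 4420) in
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this in place the host reaches the target at 10.0.0.2/10.0.0.3 while the target, inside the namespace, reaches the host at 10.0.0.1, which is exactly what the three pings confirm.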
00:18:32.801 09:09:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.801 09:09:09 -- common/autotest_common.sh@10 -- # set +x 00:18:32.801 [2024-11-17 09:09:09.524536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:32.802 [2024-11-17 09:09:09.524673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.802 [2024-11-17 09:09:09.664088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.060 [2024-11-17 09:09:09.731799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:33.060 [2024-11-17 09:09:09.731960] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.060 [2024-11-17 09:09:09.731976] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.060 [2024-11-17 09:09:09.731987] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.060 [2024-11-17 09:09:09.732023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.628 09:09:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.628 09:09:10 -- common/autotest_common.sh@862 -- # return 0 00:18:33.629 09:09:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:33.629 09:09:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:33.629 09:09:10 -- common/autotest_common.sh@10 -- # set +x 00:18:33.888 09:09:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.888 09:09:10 -- target/dif.sh@139 -- # create_transport 00:18:33.888 09:09:10 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:33.888 09:09:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.888 09:09:10 -- common/autotest_common.sh@10 -- # set +x 00:18:33.888 [2024-11-17 09:09:10.589340] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.888 09:09:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.888 09:09:10 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:33.888 09:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:33.888 09:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.888 09:09:10 -- common/autotest_common.sh@10 -- # set +x 00:18:33.888 ************************************ 00:18:33.888 START TEST fio_dif_1_default 00:18:33.888 ************************************ 00:18:33.888 09:09:10 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:18:33.888 09:09:10 -- target/dif.sh@86 -- # create_subsystems 0 00:18:33.888 09:09:10 -- target/dif.sh@28 -- # local sub 00:18:33.888 09:09:10 -- target/dif.sh@30 -- # for sub in "$@" 00:18:33.888 09:09:10 -- target/dif.sh@31 -- # create_subsystem 0 00:18:33.888 09:09:10 -- target/dif.sh@18 -- # local sub_id=0 00:18:33.888 09:09:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:33.888 09:09:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.888 09:09:10 -- common/autotest_common.sh@10 -- # set +x 00:18:33.888 bdev_null0 00:18:33.888 09:09:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.888 09:09:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:33.888 09:09:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.888 09:09:10 -- common/autotest_common.sh@10 -- # set +x 00:18:33.888 09:09:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.888 09:09:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:33.888 09:09:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.888 09:09:10 -- common/autotest_common.sh@10 -- # set +x 00:18:33.888 09:09:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.889 09:09:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:33.889 09:09:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.889 09:09:10 -- common/autotest_common.sh@10 -- # set +x 00:18:33.889 [2024-11-17 09:09:10.637595] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.889 09:09:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.889 09:09:10 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:33.889 09:09:10 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:33.889 09:09:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:33.889 09:09:10 -- nvmf/common.sh@520 -- # config=() 00:18:33.889 09:09:10 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.889 09:09:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:33.889 09:09:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.889 09:09:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.889 { 00:18:33.889 "params": { 00:18:33.889 "name": "Nvme$subsystem", 00:18:33.889 "trtype": "$TEST_TRANSPORT", 00:18:33.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.889 "adrfam": "ipv4", 00:18:33.889 "trsvcid": "$NVMF_PORT", 00:18:33.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.889 "hdgst": ${hdgst:-false}, 00:18:33.889 "ddgst": ${ddgst:-false} 00:18:33.889 }, 00:18:33.889 "method": "bdev_nvme_attach_controller" 00:18:33.889 } 00:18:33.889 EOF 00:18:33.889 )") 00:18:33.889 09:09:10 -- target/dif.sh@82 -- # gen_fio_conf 00:18:33.889 09:09:10 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:33.889 09:09:10 -- target/dif.sh@54 -- # local file 00:18:33.889 09:09:10 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:33.889 09:09:10 -- target/dif.sh@56 -- # cat 00:18:33.889 09:09:10 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:33.889 09:09:10 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:33.889 09:09:10 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:33.889 09:09:10 -- nvmf/common.sh@542 -- # cat 00:18:33.889 09:09:10 -- common/autotest_common.sh@1330 -- # shift 00:18:33.889 09:09:10 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:33.889 09:09:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:33.889 09:09:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:33.889 09:09:10 -- target/dif.sh@72 -- # (( file <= files )) 00:18:33.889 09:09:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:33.889 
09:09:10 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:33.889 09:09:10 -- nvmf/common.sh@544 -- # jq . 00:18:33.889 09:09:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:33.889 09:09:10 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.889 09:09:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.889 "params": { 00:18:33.889 "name": "Nvme0", 00:18:33.889 "trtype": "tcp", 00:18:33.889 "traddr": "10.0.0.2", 00:18:33.889 "adrfam": "ipv4", 00:18:33.889 "trsvcid": "4420", 00:18:33.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:33.889 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:33.889 "hdgst": false, 00:18:33.889 "ddgst": false 00:18:33.889 }, 00:18:33.889 "method": "bdev_nvme_attach_controller" 00:18:33.889 }' 00:18:33.889 09:09:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:33.889 09:09:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:33.889 09:09:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:33.889 09:09:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:33.889 09:09:10 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:33.889 09:09:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:33.889 09:09:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:33.889 09:09:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:33.889 09:09:10 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:33.889 09:09:10 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:34.148 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:34.148 fio-3.35 00:18:34.148 Starting 1 thread 00:18:34.407 [2024-11-17 09:09:11.202937] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
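fio_dif_1_default drives the DIF-enabled null bdev through fio's SPDK bdev plugin: the JSON printed above (a single bdev_nvme_attach_controller over TCP to nqn.2016-06.io.spdk:cnode0) is handed to fio via --spdk_json_conf, and the plugin is loaded with LD_PRELOAD. A stripped-down equivalent of that invocation, writing the config to a file instead of the /dev/fd substitutions the harness uses; the "subsystems"/"bdev" envelope and the Nvme0n1 filename are assumptions, since only the method/params fragment and the fio job options are visible in this log:

    cat > /tmp/nvme0_bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false }
    } ] } ] }
    EOF
    # preload the SPDK bdev engine and run the same 4 KiB randread, iodepth 4, 10 s job;
    # Nvme0n1 is the bdev name the attached controller is assumed to expose
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
        --spdk_json_conf /tmp/nvme0_bdev.json --filename=Nvme0n1 \
        --rw=randread --bs=4096 --iodepth=4 --runtime=10 --time_based=1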
00:18:34.407 [2024-11-17 09:09:11.203237] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:46.622 00:18:46.622 filename0: (groupid=0, jobs=1): err= 0: pid=74663: Sun Nov 17 09:09:21 2024 00:18:46.622 read: IOPS=9415, BW=36.8MiB/s (38.6MB/s)(368MiB/10001msec) 00:18:46.622 slat (nsec): min=5800, max=59338, avg=8166.93, stdev=3647.65 00:18:46.622 clat (usec): min=308, max=4879, avg=400.83, stdev=56.67 00:18:46.622 lat (usec): min=313, max=4906, avg=409.00, stdev=57.34 00:18:46.622 clat percentiles (usec): 00:18:46.622 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 363], 00:18:46.622 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 404], 00:18:46.622 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 465], 95.00th=[ 490], 00:18:46.622 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 635], 99.95th=[ 701], 00:18:46.622 | 99.99th=[ 1483] 00:18:46.622 bw ( KiB/s): min=35736, max=38880, per=100.00%, avg=37672.00, stdev=732.59, samples=19 00:18:46.622 iops : min= 8934, max= 9720, avg=9418.00, stdev=183.15, samples=19 00:18:46.622 lat (usec) : 500=96.60%, 750=3.36%, 1000=0.01% 00:18:46.622 lat (msec) : 2=0.02%, 10=0.01% 00:18:46.622 cpu : usr=85.12%, sys=13.06%, ctx=18, majf=0, minf=9 00:18:46.622 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.622 issued rwts: total=94164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.622 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:46.622 00:18:46.622 Run status group 0 (all jobs): 00:18:46.622 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=368MiB (386MB), run=10001-10001msec 00:18:46.622 09:09:21 -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:46.622 09:09:21 -- target/dif.sh@43 -- # local sub 00:18:46.622 09:09:21 -- target/dif.sh@45 -- # for sub in "$@" 00:18:46.622 09:09:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:46.622 09:09:21 -- target/dif.sh@36 -- # local sub_id=0 00:18:46.622 09:09:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 ************************************ 00:18:46.622 END TEST fio_dif_1_default 00:18:46.622 ************************************ 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 00:18:46.622 real 0m10.903s 00:18:46.622 user 0m9.116s 00:18:46.622 sys 0m1.531s 00:18:46.622 09:09:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 09:09:21 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:46.622 09:09:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:46.622 09:09:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 ************************************ 00:18:46.622 START TEST 
fio_dif_1_multi_subsystems 00:18:46.622 ************************************ 00:18:46.622 09:09:21 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:18:46.622 09:09:21 -- target/dif.sh@92 -- # local files=1 00:18:46.622 09:09:21 -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:46.622 09:09:21 -- target/dif.sh@28 -- # local sub 00:18:46.622 09:09:21 -- target/dif.sh@30 -- # for sub in "$@" 00:18:46.622 09:09:21 -- target/dif.sh@31 -- # create_subsystem 0 00:18:46.622 09:09:21 -- target/dif.sh@18 -- # local sub_id=0 00:18:46.622 09:09:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 bdev_null0 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 [2024-11-17 09:09:21.598426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@30 -- # for sub in "$@" 00:18:46.622 09:09:21 -- target/dif.sh@31 -- # create_subsystem 1 00:18:46.622 09:09:21 -- target/dif.sh@18 -- # local sub_id=1 00:18:46.622 09:09:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 bdev_null1 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.622 09:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.622 09:09:21 -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.622 09:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.622 09:09:21 -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:46.622 09:09:21 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:46.622 09:09:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:46.622 09:09:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:46.622 09:09:21 -- nvmf/common.sh@520 -- # config=() 00:18:46.622 09:09:21 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:46.622 09:09:21 -- nvmf/common.sh@520 -- # local subsystem config 00:18:46.622 09:09:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:46.622 09:09:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:46.623 09:09:21 -- target/dif.sh@82 -- # gen_fio_conf 00:18:46.623 09:09:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:46.623 09:09:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:46.623 { 00:18:46.623 "params": { 00:18:46.623 "name": "Nvme$subsystem", 00:18:46.623 "trtype": "$TEST_TRANSPORT", 00:18:46.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.623 "adrfam": "ipv4", 00:18:46.623 "trsvcid": "$NVMF_PORT", 00:18:46.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.623 "hdgst": ${hdgst:-false}, 00:18:46.623 "ddgst": ${ddgst:-false} 00:18:46.623 }, 00:18:46.623 "method": "bdev_nvme_attach_controller" 00:18:46.623 } 00:18:46.623 EOF 00:18:46.623 )") 00:18:46.623 09:09:21 -- target/dif.sh@54 -- # local file 00:18:46.623 09:09:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:46.623 09:09:21 -- target/dif.sh@56 -- # cat 00:18:46.623 09:09:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.623 09:09:21 -- common/autotest_common.sh@1330 -- # shift 00:18:46.623 09:09:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:46.623 09:09:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:46.623 09:09:21 -- nvmf/common.sh@542 -- # cat 00:18:46.623 09:09:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.623 09:09:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:46.623 09:09:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:46.623 09:09:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:46.623 09:09:21 -- target/dif.sh@72 -- # (( file <= files )) 00:18:46.623 09:09:21 -- target/dif.sh@73 -- # cat 00:18:46.623 09:09:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:46.623 09:09:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:46.623 { 00:18:46.623 "params": { 00:18:46.623 "name": "Nvme$subsystem", 00:18:46.623 "trtype": "$TEST_TRANSPORT", 00:18:46.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.623 "adrfam": "ipv4", 00:18:46.623 "trsvcid": "$NVMF_PORT", 00:18:46.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.623 "hdgst": ${hdgst:-false}, 00:18:46.623 "ddgst": ${ddgst:-false} 00:18:46.623 }, 00:18:46.623 "method": "bdev_nvme_attach_controller" 00:18:46.623 } 00:18:46.623 EOF 00:18:46.623 )") 00:18:46.623 09:09:21 -- target/dif.sh@72 -- # (( file++ )) 00:18:46.623 09:09:21 -- 
target/dif.sh@72 -- # (( file <= files )) 00:18:46.623 09:09:21 -- nvmf/common.sh@542 -- # cat 00:18:46.623 09:09:21 -- nvmf/common.sh@544 -- # jq . 00:18:46.623 09:09:21 -- nvmf/common.sh@545 -- # IFS=, 00:18:46.623 09:09:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:46.623 "params": { 00:18:46.623 "name": "Nvme0", 00:18:46.623 "trtype": "tcp", 00:18:46.623 "traddr": "10.0.0.2", 00:18:46.623 "adrfam": "ipv4", 00:18:46.623 "trsvcid": "4420", 00:18:46.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:46.623 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:46.623 "hdgst": false, 00:18:46.623 "ddgst": false 00:18:46.623 }, 00:18:46.623 "method": "bdev_nvme_attach_controller" 00:18:46.623 },{ 00:18:46.623 "params": { 00:18:46.623 "name": "Nvme1", 00:18:46.623 "trtype": "tcp", 00:18:46.623 "traddr": "10.0.0.2", 00:18:46.623 "adrfam": "ipv4", 00:18:46.623 "trsvcid": "4420", 00:18:46.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.623 "hdgst": false, 00:18:46.623 "ddgst": false 00:18:46.623 }, 00:18:46.623 "method": "bdev_nvme_attach_controller" 00:18:46.623 }' 00:18:46.623 09:09:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:46.623 09:09:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:46.623 09:09:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:46.623 09:09:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.623 09:09:21 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:46.623 09:09:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:46.623 09:09:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:46.623 09:09:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:46.623 09:09:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:46.623 09:09:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:46.623 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:46.623 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:46.623 fio-3.35 00:18:46.623 Starting 2 threads 00:18:46.623 [2024-11-17 09:09:22.250167] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
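The job description handed to fio on /dev/fd/61 is generated by gen_fio_conf and is not echoed in the trace. Below is a minimal bash sketch of a job file with the same shape as the fio banner above (rw=randread, 4096B blocks, iodepth=4, two jobs named filename0/filename1). The bdev names Nvme0n1/Nvme1n1 are assumptions derived from the controller names Nvme0/Nvme1 in the JSON config, and time_based/runtime are inferred from the 10001 msec run time, so the real generated file may differ.

cat > dif_multi.fio <<'FIO' # hypothetical file name; the harness feeds fio through a file descriptor instead
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
# assumed bdev exposed for namespace 1 of controller Nvme0
filename=Nvme0n1

[filename1]
# assumed bdev exposed for namespace 1 of controller Nvme1
filename=Nvme1n1
FIO

With two such jobs and the plugin preloaded, fio reports "Starting 2 threads", matching the run recorded below.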
00:18:46.623 [2024-11-17 09:09:22.250244] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:56.601 00:18:56.601 filename0: (groupid=0, jobs=1): err= 0: pid=74823: Sun Nov 17 09:09:32 2024 00:18:56.601 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(198MiB/10001msec) 00:18:56.601 slat (nsec): min=6388, max=79649, avg=13111.67, stdev=5036.85 00:18:56.601 clat (usec): min=579, max=2394, avg=755.00, stdev=65.58 00:18:56.601 lat (usec): min=586, max=2433, avg=768.11, stdev=66.20 00:18:56.601 clat percentiles (usec): 00:18:56.601 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 701], 00:18:56.601 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:18:56.601 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 865], 00:18:56.601 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 1037], 00:18:56.601 | 99.99th=[ 1434] 00:18:56.601 bw ( KiB/s): min=19872, max=20832, per=50.07%, avg=20285.79, stdev=242.71, samples=19 00:18:56.601 iops : min= 4968, max= 5208, avg=5071.42, stdev=60.63, samples=19 00:18:56.601 lat (usec) : 750=51.31%, 1000=48.62% 00:18:56.601 lat (msec) : 2=0.07%, 4=0.01% 00:18:56.601 cpu : usr=90.39%, sys=8.23%, ctx=10, majf=0, minf=9 00:18:56.601 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:56.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.601 issued rwts: total=50648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.601 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:56.601 filename1: (groupid=0, jobs=1): err= 0: pid=74824: Sun Nov 17 09:09:32 2024 00:18:56.601 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(198MiB/10001msec) 00:18:56.601 slat (nsec): min=6306, max=79033, avg=13505.55, stdev=5117.65 00:18:56.601 clat (usec): min=605, max=2398, avg=752.31, stdev=63.24 00:18:56.601 lat (usec): min=623, max=2437, avg=765.81, stdev=64.04 00:18:56.601 clat percentiles (usec): 00:18:56.601 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 701], 00:18:56.601 | 30.00th=[ 717], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 766], 00:18:56.601 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 857], 00:18:56.601 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 979], 99.95th=[ 1020], 00:18:56.601 | 99.99th=[ 1418] 00:18:56.601 bw ( KiB/s): min=19872, max=20832, per=50.07%, avg=20285.79, stdev=242.71, samples=19 00:18:56.601 iops : min= 4968, max= 5208, avg=5071.42, stdev=60.63, samples=19 00:18:56.601 lat (usec) : 750=53.53%, 1000=46.41% 00:18:56.601 lat (msec) : 2=0.05%, 4=0.01% 00:18:56.601 cpu : usr=90.38%, sys=8.27%, ctx=7, majf=0, minf=0 00:18:56.601 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:56.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.601 issued rwts: total=50648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.601 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:56.601 00:18:56.601 Run status group 0 (all jobs): 00:18:56.601 READ: bw=39.6MiB/s (41.5MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.7MB/s), io=396MiB (415MB), run=10001-10001msec 00:18:56.601 09:09:32 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:56.601 09:09:32 -- target/dif.sh@43 -- # local sub 00:18:56.601 09:09:32 -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.601 09:09:32 -- target/dif.sh@46 
-- # destroy_subsystem 0 00:18:56.601 09:09:32 -- target/dif.sh@36 -- # local sub_id=0 00:18:56.601 09:09:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:56.601 09:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 09:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.601 09:09:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:56.601 09:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 09:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.601 09:09:32 -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.601 09:09:32 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:56.601 09:09:32 -- target/dif.sh@36 -- # local sub_id=1 00:18:56.601 09:09:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.601 09:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 09:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.601 09:09:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:56.601 09:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 ************************************ 00:18:56.601 END TEST fio_dif_1_multi_subsystems 00:18:56.601 ************************************ 00:18:56.601 09:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.601 00:18:56.601 real 0m11.013s 00:18:56.601 user 0m18.769s 00:18:56.601 sys 0m1.906s 00:18:56.601 09:09:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 09:09:32 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:56.601 09:09:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:56.601 09:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 ************************************ 00:18:56.601 START TEST fio_dif_rand_params 00:18:56.601 ************************************ 00:18:56.601 09:09:32 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:18:56.601 09:09:32 -- target/dif.sh@100 -- # local NULL_DIF 00:18:56.601 09:09:32 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:56.601 09:09:32 -- target/dif.sh@103 -- # NULL_DIF=3 00:18:56.601 09:09:32 -- target/dif.sh@103 -- # bs=128k 00:18:56.601 09:09:32 -- target/dif.sh@103 -- # numjobs=3 00:18:56.601 09:09:32 -- target/dif.sh@103 -- # iodepth=3 00:18:56.601 09:09:32 -- target/dif.sh@103 -- # runtime=5 00:18:56.601 09:09:32 -- target/dif.sh@105 -- # create_subsystems 0 00:18:56.601 09:09:32 -- target/dif.sh@28 -- # local sub 00:18:56.601 09:09:32 -- target/dif.sh@30 -- # for sub in "$@" 00:18:56.601 09:09:32 -- target/dif.sh@31 -- # create_subsystem 0 00:18:56.601 09:09:32 -- target/dif.sh@18 -- # local sub_id=0 00:18:56.601 09:09:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:56.601 09:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 bdev_null0 00:18:56.601 09:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
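The rpc_cmd calls traced through this section map directly onto SPDK JSON-RPC methods. A minimal sketch of the same per-subsystem setup and teardown using scripts/rpc.py follows; the method names and arguments are copied verbatim from the trace, while the assumption is that rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py # path assumed from the repo layout used elsewhere in this log

# setup: null bdev with 16-byte metadata and DIF type 3, then subsystem, namespace and TCP listener
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# teardown, in the same order destroy_subsystem uses above
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0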
00:18:56.601 09:09:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:56.601 09:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 09:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.601 09:09:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:56.601 09:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 09:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.601 09:09:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:56.601 09:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.601 09:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:56.601 [2024-11-17 09:09:32.670648] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.601 09:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.601 09:09:32 -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:56.601 09:09:32 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:56.601 09:09:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:56.601 09:09:32 -- nvmf/common.sh@520 -- # config=() 00:18:56.601 09:09:32 -- nvmf/common.sh@520 -- # local subsystem config 00:18:56.601 09:09:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:56.601 09:09:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:56.601 { 00:18:56.601 "params": { 00:18:56.601 "name": "Nvme$subsystem", 00:18:56.601 "trtype": "$TEST_TRANSPORT", 00:18:56.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.601 "adrfam": "ipv4", 00:18:56.601 "trsvcid": "$NVMF_PORT", 00:18:56.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.601 "hdgst": ${hdgst:-false}, 00:18:56.601 "ddgst": ${ddgst:-false} 00:18:56.601 }, 00:18:56.601 "method": "bdev_nvme_attach_controller" 00:18:56.601 } 00:18:56.601 EOF 00:18:56.601 )") 00:18:56.601 09:09:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.601 09:09:32 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.601 09:09:32 -- target/dif.sh@82 -- # gen_fio_conf 00:18:56.601 09:09:32 -- target/dif.sh@54 -- # local file 00:18:56.601 09:09:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:56.601 09:09:32 -- target/dif.sh@56 -- # cat 00:18:56.601 09:09:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:56.601 09:09:32 -- nvmf/common.sh@542 -- # cat 00:18:56.601 09:09:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:56.601 09:09:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.601 09:09:32 -- common/autotest_common.sh@1330 -- # shift 00:18:56.601 09:09:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:56.602 09:09:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.602 09:09:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:56.602 09:09:32 -- target/dif.sh@72 -- # (( file <= files )) 00:18:56.602 09:09:32 -- nvmf/common.sh@544 -- # jq . 
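gen_nvmf_target_json, whose expansion is traced here, accumulates one bdev_nvme_attach_controller fragment per subsystem id and validates the joined result with jq. A condensed bash sketch of that pattern follows: the fragment text matches the heredoc in the trace and the NVMF_*/TEST_TRANSPORT variables are assumed to be exported by the test environment, while the outer "subsystems"/"bdev" wrapper fed to jq is an assumption about what the spdk_bdev fio plugin consumes and is not visible in this part of the log.

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one attach_controller fragment per subsystem id, as printed in the trace
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the fragments with commas and let jq validate/pretty-print the result;
    # the surrounding "subsystems"/"bdev" wrapper is assumed, not shown in this trace
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=","; printf '%s' "${config[*]}") ]
    }
  ]
}
JSON
}

Called as gen_nvmf_target_json 0 (or 0 1 2 for the three-subsystem case later in this run), it would emit one attach_controller entry per id, which is what the printf output recorded below shows.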
00:18:56.602 09:09:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.602 09:09:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:56.602 09:09:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:56.602 09:09:32 -- nvmf/common.sh@545 -- # IFS=, 00:18:56.602 09:09:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:56.602 "params": { 00:18:56.602 "name": "Nvme0", 00:18:56.602 "trtype": "tcp", 00:18:56.602 "traddr": "10.0.0.2", 00:18:56.602 "adrfam": "ipv4", 00:18:56.602 "trsvcid": "4420", 00:18:56.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:56.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:56.602 "hdgst": false, 00:18:56.602 "ddgst": false 00:18:56.602 }, 00:18:56.602 "method": "bdev_nvme_attach_controller" 00:18:56.602 }' 00:18:56.602 09:09:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:56.602 09:09:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:56.602 09:09:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.602 09:09:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.602 09:09:32 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:56.602 09:09:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:56.602 09:09:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:56.602 09:09:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:56.602 09:09:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:56.602 09:09:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.602 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:56.602 ... 00:18:56.602 fio-3.35 00:18:56.602 Starting 3 threads 00:18:56.602 [2024-11-17 09:09:33.231058] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
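Once the JSON target config and the job file exist, the traced command above boils down to running the stock fio binary with the SPDK bdev plugin preloaded. A sketch with ordinary files in place of the /dev/fd/62 and /dev/fd/61 descriptors; target.json and randread-128k.fio are hypothetical names, and the plugin path assumes SPDK was configured with --with-fio=/usr/src/fio, as the build tree referenced here suggests.

SPDK_DIR=/home/vagrant/spdk_repo/spdk     # repo path as it appears in the trace
FIO_PLUGIN=$SPDK_DIR/build/fio/spdk_bdev  # external ioengine loaded via LD_PRELOAD above

# target.json: output of gen_nvmf_target_json; randread-128k.fio: a 128KiB/iodepth=3 job file
# along the lines of the earlier sketch, matching the banner below
LD_PRELOAD=$FIO_PLUGIN /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf target.json \
    randread-128k.fio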
00:18:56.602 [2024-11-17 09:09:33.231128] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:01.875 00:19:01.875 filename0: (groupid=0, jobs=1): err= 0: pid=74978: Sun Nov 17 09:09:38 2024 00:19:01.875 read: IOPS=270, BW=33.8MiB/s (35.4MB/s)(169MiB/5008msec) 00:19:01.875 slat (nsec): min=6697, max=67661, avg=10553.11, stdev=5245.02 00:19:01.875 clat (usec): min=8050, max=14496, avg=11078.22, stdev=585.54 00:19:01.875 lat (usec): min=8057, max=14511, avg=11088.78, stdev=585.97 00:19:01.875 clat percentiles (usec): 00:19:01.875 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 20.00th=[10683], 00:19:01.875 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:01.875 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11863], 95.00th=[11994], 00:19:01.875 | 99.00th=[12518], 99.50th=[13566], 99.90th=[14484], 99.95th=[14484], 00:19:01.875 | 99.99th=[14484] 00:19:01.875 bw ( KiB/s): min=33792, max=35328, per=33.35%, avg=34552.90, stdev=503.26, samples=10 00:19:01.875 iops : min= 264, max= 276, avg=269.80, stdev= 4.05, samples=10 00:19:01.875 lat (msec) : 10=0.67%, 20=99.33% 00:19:01.875 cpu : usr=91.77%, sys=7.59%, ctx=15, majf=0, minf=0 00:19:01.875 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.875 issued rwts: total=1353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:01.875 filename0: (groupid=0, jobs=1): err= 0: pid=74979: Sun Nov 17 09:09:38 2024 00:19:01.875 read: IOPS=269, BW=33.7MiB/s (35.4MB/s)(169MiB/5002msec) 00:19:01.875 slat (nsec): min=6635, max=51711, avg=10011.54, stdev=4473.18 00:19:01.875 clat (usec): min=7903, max=15366, avg=11089.35, stdev=603.07 00:19:01.875 lat (usec): min=7912, max=15382, avg=11099.36, stdev=603.64 00:19:01.875 clat percentiles (usec): 00:19:01.875 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 20.00th=[10683], 00:19:01.875 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:01.875 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:19:01.875 | 99.00th=[12649], 99.50th=[14222], 99.90th=[15401], 99.95th=[15401], 00:19:01.875 | 99.99th=[15401] 00:19:01.876 bw ( KiB/s): min=33792, max=35328, per=33.34%, avg=34537.00, stdev=545.25, samples=9 00:19:01.876 iops : min= 264, max= 276, avg=269.67, stdev= 4.30, samples=9 00:19:01.876 lat (msec) : 10=0.22%, 20=99.78% 00:19:01.876 cpu : usr=92.44%, sys=6.98%, ctx=9, majf=0, minf=0 00:19:01.876 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.876 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.876 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:01.876 filename0: (groupid=0, jobs=1): err= 0: pid=74980: Sun Nov 17 09:09:38 2024 00:19:01.876 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(169MiB/5008msec) 00:19:01.876 slat (nsec): min=6710, max=32917, avg=9887.12, stdev=4186.37 00:19:01.876 clat (usec): min=8081, max=18592, avg=11103.83, stdev=680.75 00:19:01.876 lat (usec): min=8088, max=18616, avg=11113.72, stdev=681.03 00:19:01.876 clat percentiles (usec): 00:19:01.876 | 1.00th=[10290], 5.00th=[10421], 
10.00th=[10552], 20.00th=[10683], 00:19:01.876 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:01.876 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11863], 95.00th=[11994], 00:19:01.876 | 99.00th=[13435], 99.50th=[14091], 99.90th=[18482], 99.95th=[18482], 00:19:01.876 | 99.99th=[18482] 00:19:01.876 bw ( KiB/s): min=33792, max=35328, per=33.29%, avg=34483.20, stdev=566.68, samples=10 00:19:01.876 iops : min= 264, max= 276, avg=269.40, stdev= 4.43, samples=10 00:19:01.876 lat (msec) : 10=0.44%, 20=99.56% 00:19:01.876 cpu : usr=92.15%, sys=7.31%, ctx=5, majf=0, minf=0 00:19:01.876 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.876 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.876 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:01.876 00:19:01.876 Run status group 0 (all jobs): 00:19:01.876 READ: bw=101MiB/s (106MB/s), 33.7MiB/s-33.8MiB/s (35.3MB/s-35.4MB/s), io=507MiB (531MB), run=5002-5008msec 00:19:01.876 09:09:38 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:01.876 09:09:38 -- target/dif.sh@43 -- # local sub 00:19:01.876 09:09:38 -- target/dif.sh@45 -- # for sub in "$@" 00:19:01.876 09:09:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:01.876 09:09:38 -- target/dif.sh@36 -- # local sub_id=0 00:19:01.876 09:09:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:01.876 09:09:38 -- target/dif.sh@109 -- # bs=4k 00:19:01.876 09:09:38 -- target/dif.sh@109 -- # numjobs=8 00:19:01.876 09:09:38 -- target/dif.sh@109 -- # iodepth=16 00:19:01.876 09:09:38 -- target/dif.sh@109 -- # runtime= 00:19:01.876 09:09:38 -- target/dif.sh@109 -- # files=2 00:19:01.876 09:09:38 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:01.876 09:09:38 -- target/dif.sh@28 -- # local sub 00:19:01.876 09:09:38 -- target/dif.sh@30 -- # for sub in "$@" 00:19:01.876 09:09:38 -- target/dif.sh@31 -- # create_subsystem 0 00:19:01.876 09:09:38 -- target/dif.sh@18 -- # local sub_id=0 00:19:01.876 09:09:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 bdev_null0 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 [2024-11-17 09:09:38.589262] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@30 -- # for sub in "$@" 00:19:01.876 09:09:38 -- target/dif.sh@31 -- # create_subsystem 1 00:19:01.876 09:09:38 -- target/dif.sh@18 -- # local sub_id=1 00:19:01.876 09:09:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 bdev_null1 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@30 -- # for sub in "$@" 00:19:01.876 09:09:38 -- target/dif.sh@31 -- # create_subsystem 2 00:19:01.876 09:09:38 -- target/dif.sh@18 -- # local sub_id=2 00:19:01.876 09:09:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 bdev_null2 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:01.876 09:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:09:38 -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:09:38 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:01.876 09:09:38 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:01.876 09:09:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:01.876 09:09:38 -- nvmf/common.sh@520 -- # config=() 00:19:01.876 09:09:38 -- nvmf/common.sh@520 -- # local subsystem config 00:19:01.876 09:09:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.876 09:09:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:01.876 09:09:38 -- target/dif.sh@82 -- # gen_fio_conf 00:19:01.876 09:09:38 -- target/dif.sh@54 -- # local file 00:19:01.876 09:09:38 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.876 09:09:38 -- target/dif.sh@56 -- # cat 00:19:01.876 09:09:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:01.876 09:09:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:01.876 09:09:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:01.876 { 00:19:01.876 "params": { 00:19:01.876 "name": "Nvme$subsystem", 00:19:01.876 "trtype": "$TEST_TRANSPORT", 00:19:01.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.876 "adrfam": "ipv4", 00:19:01.876 "trsvcid": "$NVMF_PORT", 00:19:01.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.876 "hdgst": ${hdgst:-false}, 00:19:01.876 "ddgst": ${ddgst:-false} 00:19:01.876 }, 00:19:01.876 "method": "bdev_nvme_attach_controller" 00:19:01.876 } 00:19:01.876 EOF 00:19:01.876 )") 00:19:01.876 09:09:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:01.876 09:09:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.876 09:09:38 -- common/autotest_common.sh@1330 -- # shift 00:19:01.876 09:09:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:01.876 09:09:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.876 09:09:38 -- nvmf/common.sh@542 -- # cat 00:19:01.876 09:09:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:01.876 09:09:38 -- target/dif.sh@72 -- # (( file <= files )) 00:19:01.876 09:09:38 -- target/dif.sh@73 -- # cat 00:19:01.876 09:09:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.876 09:09:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:01.876 09:09:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:01.876 09:09:38 -- target/dif.sh@72 -- # (( file++ )) 00:19:01.876 09:09:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:01.877 09:09:38 -- target/dif.sh@72 -- # (( file <= files )) 00:19:01.877 09:09:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:01.877 { 00:19:01.877 "params": { 00:19:01.877 "name": "Nvme$subsystem", 00:19:01.877 "trtype": "$TEST_TRANSPORT", 00:19:01.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.877 "adrfam": "ipv4", 00:19:01.877 "trsvcid": 
"$NVMF_PORT", 00:19:01.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.877 "hdgst": ${hdgst:-false}, 00:19:01.877 "ddgst": ${ddgst:-false} 00:19:01.877 }, 00:19:01.877 "method": "bdev_nvme_attach_controller" 00:19:01.877 } 00:19:01.877 EOF 00:19:01.877 )") 00:19:01.877 09:09:38 -- target/dif.sh@73 -- # cat 00:19:01.877 09:09:38 -- nvmf/common.sh@542 -- # cat 00:19:01.877 09:09:38 -- target/dif.sh@72 -- # (( file++ )) 00:19:01.877 09:09:38 -- target/dif.sh@72 -- # (( file <= files )) 00:19:01.877 09:09:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:01.877 09:09:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:01.877 { 00:19:01.877 "params": { 00:19:01.877 "name": "Nvme$subsystem", 00:19:01.877 "trtype": "$TEST_TRANSPORT", 00:19:01.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.877 "adrfam": "ipv4", 00:19:01.877 "trsvcid": "$NVMF_PORT", 00:19:01.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.877 "hdgst": ${hdgst:-false}, 00:19:01.877 "ddgst": ${ddgst:-false} 00:19:01.877 }, 00:19:01.877 "method": "bdev_nvme_attach_controller" 00:19:01.877 } 00:19:01.877 EOF 00:19:01.877 )") 00:19:01.877 09:09:38 -- nvmf/common.sh@542 -- # cat 00:19:01.877 09:09:38 -- nvmf/common.sh@544 -- # jq . 00:19:01.877 09:09:38 -- nvmf/common.sh@545 -- # IFS=, 00:19:01.877 09:09:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:01.877 "params": { 00:19:01.877 "name": "Nvme0", 00:19:01.877 "trtype": "tcp", 00:19:01.877 "traddr": "10.0.0.2", 00:19:01.877 "adrfam": "ipv4", 00:19:01.877 "trsvcid": "4420", 00:19:01.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:01.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:01.877 "hdgst": false, 00:19:01.877 "ddgst": false 00:19:01.877 }, 00:19:01.877 "method": "bdev_nvme_attach_controller" 00:19:01.877 },{ 00:19:01.877 "params": { 00:19:01.877 "name": "Nvme1", 00:19:01.877 "trtype": "tcp", 00:19:01.877 "traddr": "10.0.0.2", 00:19:01.877 "adrfam": "ipv4", 00:19:01.877 "trsvcid": "4420", 00:19:01.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.877 "hdgst": false, 00:19:01.877 "ddgst": false 00:19:01.877 }, 00:19:01.877 "method": "bdev_nvme_attach_controller" 00:19:01.877 },{ 00:19:01.877 "params": { 00:19:01.877 "name": "Nvme2", 00:19:01.877 "trtype": "tcp", 00:19:01.877 "traddr": "10.0.0.2", 00:19:01.877 "adrfam": "ipv4", 00:19:01.877 "trsvcid": "4420", 00:19:01.877 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:01.877 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:01.877 "hdgst": false, 00:19:01.877 "ddgst": false 00:19:01.877 }, 00:19:01.877 "method": "bdev_nvme_attach_controller" 00:19:01.877 }' 00:19:01.877 09:09:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:01.877 09:09:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:01.877 09:09:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.877 09:09:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.877 09:09:38 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:01.877 09:09:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:01.877 09:09:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:01.877 09:09:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:01.877 09:09:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:01.877 09:09:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:02.136 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:02.136 ... 00:19:02.136 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:02.136 ... 00:19:02.136 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:02.136 ... 00:19:02.136 fio-3.35 00:19:02.136 Starting 24 threads 00:19:02.708 [2024-11-17 09:09:39.342884] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:02.708 [2024-11-17 09:09:39.342946] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:12.678 00:19:12.678 filename0: (groupid=0, jobs=1): err= 0: pid=75077: Sun Nov 17 09:09:49 2024 00:19:12.678 read: IOPS=192, BW=770KiB/s (789kB/s)(7728KiB/10032msec) 00:19:12.678 slat (usec): min=4, max=8025, avg=19.02, stdev=182.31 00:19:12.678 clat (msec): min=35, max=155, avg=82.93, stdev=21.69 00:19:12.678 lat (msec): min=35, max=155, avg=82.95, stdev=21.69 00:19:12.678 clat percentiles (msec): 00:19:12.678 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:19:12.678 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 92], 00:19:12.678 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 109], 95.00th=[ 121], 00:19:12.678 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 157], 00:19:12.678 | 99.99th=[ 157] 00:19:12.678 bw ( KiB/s): min= 528, max= 952, per=4.00%, avg=766.30, stdev=133.70, samples=20 00:19:12.678 iops : min= 132, max= 238, avg=191.55, stdev=33.43, samples=20 00:19:12.678 lat (msec) : 50=8.90%, 100=66.56%, 250=24.53% 00:19:12.678 cpu : usr=35.65%, sys=1.90%, ctx=1339, majf=0, minf=9 00:19:12.678 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.1%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:12.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 complete : 0=0.0%, 4=88.5%, 8=11.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.678 filename0: (groupid=0, jobs=1): err= 0: pid=75078: Sun Nov 17 09:09:49 2024 00:19:12.678 read: IOPS=203, BW=815KiB/s (835kB/s)(8160KiB/10011msec) 00:19:12.678 slat (usec): min=8, max=6356, avg=18.51, stdev=142.30 00:19:12.678 clat (msec): min=10, max=171, avg=78.43, stdev=23.12 00:19:12.678 lat (msec): min=10, max=171, avg=78.45, stdev=23.12 00:19:12.678 clat percentiles (msec): 00:19:12.678 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:19:12.678 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:19:12.678 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 114], 00:19:12.678 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 171], 00:19:12.678 | 99.99th=[ 171] 00:19:12.678 bw ( KiB/s): min= 624, max= 1024, per=4.17%, avg=800.68, stdev=139.38, samples=19 00:19:12.678 iops : min= 156, max= 256, avg=200.11, stdev=34.80, samples=19 00:19:12.678 lat (msec) : 20=0.25%, 50=15.00%, 100=64.46%, 250=20.29% 00:19:12.678 cpu : usr=37.97%, sys=1.88%, ctx=1343, majf=0, minf=9 00:19:12.678 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:12.678 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.678 filename0: (groupid=0, jobs=1): err= 0: pid=75079: Sun Nov 17 09:09:49 2024 00:19:12.678 read: IOPS=199, BW=797KiB/s (816kB/s)(7980KiB/10016msec) 00:19:12.678 slat (usec): min=3, max=8031, avg=26.14, stdev=310.68 00:19:12.678 clat (msec): min=22, max=168, avg=80.23, stdev=22.14 00:19:12.678 lat (msec): min=22, max=168, avg=80.26, stdev=22.16 00:19:12.678 clat percentiles (msec): 00:19:12.678 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:12.678 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:19:12.678 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 108], 95.00th=[ 112], 00:19:12.678 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:19:12.678 | 99.99th=[ 169] 00:19:12.678 bw ( KiB/s): min= 542, max= 1016, per=4.13%, avg=791.50, stdev=145.86, samples=20 00:19:12.678 iops : min= 135, max= 254, avg=197.85, stdev=36.51, samples=20 00:19:12.678 lat (msec) : 50=13.08%, 100=65.71%, 250=21.20% 00:19:12.678 cpu : usr=33.79%, sys=1.75%, ctx=945, majf=0, minf=9 00:19:12.678 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:12.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 issued rwts: total=1995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.678 filename0: (groupid=0, jobs=1): err= 0: pid=75080: Sun Nov 17 09:09:49 2024 00:19:12.678 read: IOPS=200, BW=804KiB/s (823kB/s)(8056KiB/10025msec) 00:19:12.678 slat (usec): min=4, max=12024, avg=24.26, stdev=295.93 00:19:12.678 clat (msec): min=32, max=138, avg=79.45, stdev=21.25 00:19:12.678 lat (msec): min=32, max=138, avg=79.47, stdev=21.27 00:19:12.678 clat percentiles (msec): 00:19:12.678 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 62], 00:19:12.678 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:19:12.678 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 113], 00:19:12.678 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 138], 00:19:12.678 | 99.99th=[ 140] 00:19:12.678 bw ( KiB/s): min= 640, max= 1024, per=4.18%, avg=801.50, stdev=130.08, samples=20 00:19:12.678 iops : min= 160, max= 256, avg=200.35, stdev=32.53, samples=20 00:19:12.678 lat (msec) : 50=10.08%, 100=69.22%, 250=20.71% 00:19:12.678 cpu : usr=41.88%, sys=2.29%, ctx=1216, majf=0, minf=9 00:19:12.678 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:12.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.678 filename0: (groupid=0, jobs=1): err= 0: pid=75081: Sun Nov 17 09:09:49 2024 00:19:12.678 read: IOPS=203, BW=814KiB/s (834kB/s)(8144KiB/10003msec) 00:19:12.678 slat (usec): min=6, max=8024, avg=21.05, stdev=199.65 00:19:12.678 clat (msec): min=2, max=230, avg=78.51, stdev=26.72 00:19:12.678 lat (msec): min=2, max=230, avg=78.53, stdev=26.72 00:19:12.678 clat percentiles (msec): 
00:19:12.678 | 1.00th=[ 5], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:19:12.678 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:19:12.678 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 118], 00:19:12.678 | 99.00th=[ 138], 99.50th=[ 184], 99.90th=[ 184], 99.95th=[ 230], 00:19:12.678 | 99.99th=[ 230] 00:19:12.678 bw ( KiB/s): min= 512, max= 1048, per=4.09%, avg=783.21, stdev=166.63, samples=19 00:19:12.678 iops : min= 128, max= 262, avg=195.74, stdev=41.69, samples=19 00:19:12.678 lat (msec) : 4=0.93%, 10=0.93%, 50=14.64%, 100=62.13%, 250=21.37% 00:19:12.678 cpu : usr=42.53%, sys=2.27%, ctx=1202, majf=0, minf=9 00:19:12.678 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=77.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:12.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 complete : 0=0.0%, 4=88.6%, 8=10.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.678 filename0: (groupid=0, jobs=1): err= 0: pid=75082: Sun Nov 17 09:09:49 2024 00:19:12.678 read: IOPS=209, BW=836KiB/s (856kB/s)(8368KiB/10005msec) 00:19:12.678 slat (usec): min=4, max=8024, avg=32.37, stdev=308.69 00:19:12.678 clat (msec): min=5, max=237, avg=76.36, stdev=24.10 00:19:12.678 lat (msec): min=5, max=237, avg=76.39, stdev=24.10 00:19:12.678 clat percentiles (msec): 00:19:12.678 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 55], 00:19:12.678 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 81], 00:19:12.678 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 112], 00:19:12.678 | 99.00th=[ 122], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 239], 00:19:12.678 | 99.99th=[ 239] 00:19:12.678 bw ( KiB/s): min= 512, max= 1024, per=4.26%, avg=817.16, stdev=149.69, samples=19 00:19:12.678 iops : min= 128, max= 256, avg=204.26, stdev=37.44, samples=19 00:19:12.678 lat (msec) : 10=0.48%, 20=0.14%, 50=15.11%, 100=66.54%, 250=17.73% 00:19:12.678 cpu : usr=42.80%, sys=2.49%, ctx=1289, majf=0, minf=9 00:19:12.678 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.7%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:12.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.678 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.678 filename0: (groupid=0, jobs=1): err= 0: pid=75083: Sun Nov 17 09:09:49 2024 00:19:12.678 read: IOPS=201, BW=805KiB/s (825kB/s)(8060KiB/10008msec) 00:19:12.678 slat (usec): min=3, max=4026, avg=23.33, stdev=178.59 00:19:12.679 clat (msec): min=25, max=170, avg=79.35, stdev=23.54 00:19:12.679 lat (msec): min=25, max=170, avg=79.37, stdev=23.54 00:19:12.679 clat percentiles (msec): 00:19:12.679 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:19:12.679 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:19:12.679 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 118], 00:19:12.679 | 99.00th=[ 140], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 171], 00:19:12.679 | 99.99th=[ 171] 00:19:12.679 bw ( KiB/s): min= 512, max= 1015, per=4.14%, avg=793.11, stdev=149.81, samples=19 00:19:12.679 iops : min= 128, max= 253, avg=198.21, stdev=37.42, samples=19 00:19:12.679 lat (msec) : 50=13.70%, 100=65.71%, 250=20.60% 00:19:12.679 cpu : usr=40.98%, sys=2.45%, ctx=1202, majf=0, minf=9 00:19:12.679 IO depths 
: 1=0.1%, 2=1.1%, 4=4.3%, 8=79.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:12.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 issued rwts: total=2015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.679 filename0: (groupid=0, jobs=1): err= 0: pid=75084: Sun Nov 17 09:09:49 2024 00:19:12.679 read: IOPS=198, BW=796KiB/s (815kB/s)(7992KiB/10045msec) 00:19:12.679 slat (nsec): min=4940, max=41164, avg=13641.44, stdev=4993.83 00:19:12.679 clat (msec): min=5, max=138, avg=80.29, stdev=23.22 00:19:12.679 lat (msec): min=5, max=138, avg=80.31, stdev=23.22 00:19:12.679 clat percentiles (msec): 00:19:12.679 | 1.00th=[ 6], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:19:12.679 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 86], 00:19:12.679 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 112], 00:19:12.679 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 138], 99.95th=[ 140], 00:19:12.679 | 99.99th=[ 140] 00:19:12.679 bw ( KiB/s): min= 584, max= 1142, per=4.13%, avg=792.10, stdev=144.27, samples=20 00:19:12.679 iops : min= 146, max= 285, avg=197.95, stdev=36.02, samples=20 00:19:12.679 lat (msec) : 10=2.40%, 50=8.31%, 100=69.02%, 250=20.27% 00:19:12.679 cpu : usr=34.60%, sys=2.00%, ctx=993, majf=0, minf=9 00:19:12.679 IO depths : 1=0.2%, 2=1.0%, 4=3.6%, 8=78.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:12.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 issued rwts: total=1998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.679 filename1: (groupid=0, jobs=1): err= 0: pid=75085: Sun Nov 17 09:09:49 2024 00:19:12.679 read: IOPS=219, BW=877KiB/s (898kB/s)(8772KiB/10003msec) 00:19:12.679 slat (usec): min=6, max=8027, avg=21.51, stdev=241.93 00:19:12.679 clat (usec): min=943, max=226847, avg=72877.37, stdev=27544.44 00:19:12.679 lat (usec): min=950, max=226866, avg=72898.88, stdev=27542.93 00:19:12.679 clat percentiles (usec): 00:19:12.679 | 1.00th=[ 1352], 5.00th=[ 12387], 10.00th=[ 46400], 20.00th=[ 51643], 00:19:12.679 | 30.00th=[ 60031], 40.00th=[ 68682], 50.00th=[ 71828], 60.00th=[ 76022], 00:19:12.679 | 70.00th=[ 85459], 80.00th=[ 95945], 90.00th=[106431], 95.00th=[111674], 00:19:12.679 | 99.00th=[122160], 99.50th=[179307], 99.90th=[179307], 99.95th=[227541], 00:19:12.679 | 99.99th=[227541] 00:19:12.679 bw ( KiB/s): min= 507, max= 1024, per=4.28%, avg=821.68, stdev=140.03, samples=19 00:19:12.679 iops : min= 126, max= 256, avg=205.32, stdev=35.07, samples=19 00:19:12.679 lat (usec) : 1000=0.14% 00:19:12.679 lat (msec) : 2=1.92%, 4=2.60%, 10=0.09%, 20=0.27%, 50=14.00% 00:19:12.679 lat (msec) : 100=64.84%, 250=16.14% 00:19:12.679 cpu : usr=36.72%, sys=2.29%, ctx=1066, majf=0, minf=9 00:19:12.679 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:12.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.679 filename1: (groupid=0, jobs=1): err= 0: pid=75086: Sun Nov 17 09:09:49 2024 00:19:12.679 read: IOPS=199, BW=797KiB/s 
(816kB/s)(8008KiB/10047msec) 00:19:12.679 slat (usec): min=3, max=8030, avg=17.23, stdev=179.25 00:19:12.679 clat (usec): min=1909, max=143935, avg=80149.86, stdev=24241.24 00:19:12.679 lat (usec): min=1913, max=143945, avg=80167.09, stdev=24238.90 00:19:12.679 clat percentiles (msec): 00:19:12.679 | 1.00th=[ 3], 5.00th=[ 43], 10.00th=[ 51], 20.00th=[ 63], 00:19:12.679 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 88], 00:19:12.679 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 112], 00:19:12.679 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 136], 99.95th=[ 136], 00:19:12.679 | 99.99th=[ 144] 00:19:12.679 bw ( KiB/s): min= 576, max= 1269, per=4.14%, avg=793.65, stdev=165.91, samples=20 00:19:12.679 iops : min= 144, max= 317, avg=198.35, stdev=41.49, samples=20 00:19:12.679 lat (msec) : 2=0.70%, 4=0.90%, 10=1.60%, 50=6.59%, 100=66.43% 00:19:12.679 lat (msec) : 250=23.78% 00:19:12.679 cpu : usr=43.59%, sys=2.45%, ctx=1669, majf=0, minf=0 00:19:12.679 IO depths : 1=0.1%, 2=1.6%, 4=6.1%, 8=76.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:12.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 complete : 0=0.0%, 4=89.4%, 8=9.3%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.679 filename1: (groupid=0, jobs=1): err= 0: pid=75087: Sun Nov 17 09:09:49 2024 00:19:12.679 read: IOPS=199, BW=799KiB/s (818kB/s)(8000KiB/10017msec) 00:19:12.679 slat (usec): min=3, max=8032, avg=22.37, stdev=253.46 00:19:12.679 clat (msec): min=24, max=164, avg=80.01, stdev=21.08 00:19:12.679 lat (msec): min=24, max=164, avg=80.03, stdev=21.08 00:19:12.679 clat percentiles (msec): 00:19:12.679 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:12.679 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:19:12.679 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 111], 00:19:12.679 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 165], 99.95th=[ 165], 00:19:12.679 | 99.99th=[ 165] 00:19:12.679 bw ( KiB/s): min= 640, max= 1016, per=4.15%, avg=795.90, stdev=133.94, samples=20 00:19:12.679 iops : min= 160, max= 254, avg=198.95, stdev=33.52, samples=20 00:19:12.679 lat (msec) : 50=11.65%, 100=68.55%, 250=19.80% 00:19:12.679 cpu : usr=31.91%, sys=1.73%, ctx=855, majf=0, minf=9 00:19:12.679 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:12.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.679 filename1: (groupid=0, jobs=1): err= 0: pid=75088: Sun Nov 17 09:09:49 2024 00:19:12.679 read: IOPS=200, BW=801KiB/s (821kB/s)(8016KiB/10003msec) 00:19:12.679 slat (usec): min=3, max=8032, avg=33.16, stdev=346.42 00:19:12.679 clat (msec): min=5, max=167, avg=79.68, stdev=24.32 00:19:12.679 lat (msec): min=5, max=167, avg=79.71, stdev=24.33 00:19:12.679 clat percentiles (msec): 00:19:12.679 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:19:12.679 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:19:12.679 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 117], 00:19:12.679 | 99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 167], 00:19:12.679 | 99.99th=[ 167] 00:19:12.679 bw ( 
KiB/s): min= 496, max= 1056, per=4.09%, avg=784.32, stdev=160.85, samples=19 00:19:12.679 iops : min= 124, max= 264, avg=196.05, stdev=40.24, samples=19 00:19:12.679 lat (msec) : 10=0.50%, 20=0.30%, 50=13.77%, 100=63.77%, 250=21.66% 00:19:12.679 cpu : usr=36.81%, sys=2.06%, ctx=891, majf=0, minf=9 00:19:12.679 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:12.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.679 filename1: (groupid=0, jobs=1): err= 0: pid=75089: Sun Nov 17 09:09:49 2024 00:19:12.679 read: IOPS=196, BW=787KiB/s (806kB/s)(7900KiB/10042msec) 00:19:12.679 slat (usec): min=4, max=8022, avg=21.95, stdev=254.83 00:19:12.679 clat (msec): min=5, max=155, avg=81.18, stdev=22.53 00:19:12.679 lat (msec): min=5, max=155, avg=81.21, stdev=22.54 00:19:12.679 clat percentiles (msec): 00:19:12.679 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:19:12.679 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 88], 00:19:12.679 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 111], 00:19:12.679 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 150], 99.95th=[ 157], 00:19:12.679 | 99.99th=[ 157] 00:19:12.679 bw ( KiB/s): min= 632, max= 976, per=4.09%, avg=783.45, stdev=119.53, samples=20 00:19:12.679 iops : min= 158, max= 244, avg=195.80, stdev=29.91, samples=20 00:19:12.679 lat (msec) : 10=1.62%, 50=8.71%, 100=67.29%, 250=22.38% 00:19:12.679 cpu : usr=40.10%, sys=2.25%, ctx=928, majf=0, minf=9 00:19:12.679 IO depths : 1=0.2%, 2=1.4%, 4=5.0%, 8=77.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:12.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 complete : 0=0.0%, 4=89.0%, 8=9.9%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.679 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.679 filename1: (groupid=0, jobs=1): err= 0: pid=75090: Sun Nov 17 09:09:49 2024 00:19:12.679 read: IOPS=197, BW=791KiB/s (810kB/s)(7936KiB/10030msec) 00:19:12.679 slat (usec): min=6, max=5034, avg=18.58, stdev=144.30 00:19:12.679 clat (msec): min=22, max=154, avg=80.74, stdev=22.82 00:19:12.679 lat (msec): min=22, max=154, avg=80.76, stdev=22.82 00:19:12.679 clat percentiles (msec): 00:19:12.679 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:12.679 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 89], 00:19:12.679 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 115], 00:19:12.679 | 99.00th=[ 128], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 155], 00:19:12.679 | 99.99th=[ 155] 00:19:12.679 bw ( KiB/s): min= 528, max= 1016, per=4.10%, avg=786.95, stdev=149.82, samples=20 00:19:12.680 iops : min= 132, max= 254, avg=196.70, stdev=37.48, samples=20 00:19:12.680 lat (msec) : 50=13.76%, 100=61.79%, 250=24.45% 00:19:12.680 cpu : usr=42.09%, sys=2.32%, ctx=1195, majf=0, minf=9 00:19:12.680 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:12.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.680 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:19:12.680 filename1: (groupid=0, jobs=1): err= 0: pid=75091: Sun Nov 17 09:09:49 2024 00:19:12.680 read: IOPS=195, BW=782KiB/s (801kB/s)(7848KiB/10030msec) 00:19:12.680 slat (usec): min=7, max=8025, avg=17.59, stdev=180.93 00:19:12.680 clat (msec): min=22, max=143, avg=81.66, stdev=21.26 00:19:12.680 lat (msec): min=22, max=143, avg=81.67, stdev=21.26 00:19:12.680 clat percentiles (msec): 00:19:12.680 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:19:12.680 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 88], 00:19:12.680 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 111], 00:19:12.680 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:12.680 | 99.99th=[ 144] 00:19:12.680 bw ( KiB/s): min= 632, max= 1024, per=4.06%, avg=778.15, stdev=119.63, samples=20 00:19:12.680 iops : min= 158, max= 256, avg=194.50, stdev=29.92, samples=20 00:19:12.680 lat (msec) : 50=8.77%, 100=68.45%, 250=22.78% 00:19:12.680 cpu : usr=32.42%, sys=1.79%, ctx=831, majf=0, minf=9 00:19:12.680 IO depths : 1=0.1%, 2=0.9%, 4=3.2%, 8=79.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:12.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 issued rwts: total=1962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.680 filename1: (groupid=0, jobs=1): err= 0: pid=75092: Sun Nov 17 09:09:49 2024 00:19:12.680 read: IOPS=205, BW=821KiB/s (841kB/s)(8216KiB/10006msec) 00:19:12.680 slat (usec): min=4, max=8022, avg=22.34, stdev=216.51 00:19:12.680 clat (msec): min=12, max=238, avg=77.81, stdev=24.37 00:19:12.680 lat (msec): min=12, max=238, avg=77.83, stdev=24.37 00:19:12.680 clat percentiles (msec): 00:19:12.680 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:19:12.680 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:19:12.680 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 117], 00:19:12.680 | 99.00th=[ 129], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 239], 00:19:12.680 | 99.99th=[ 239] 00:19:12.680 bw ( KiB/s): min= 512, max= 1024, per=4.20%, avg=804.11, stdev=152.50, samples=19 00:19:12.680 iops : min= 128, max= 256, avg=201.00, stdev=38.15, samples=19 00:19:12.680 lat (msec) : 20=0.29%, 50=15.19%, 100=67.43%, 250=17.09% 00:19:12.680 cpu : usr=36.42%, sys=2.17%, ctx=1000, majf=0, minf=9 00:19:12.680 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:12.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 issued rwts: total=2054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.680 filename2: (groupid=0, jobs=1): err= 0: pid=75093: Sun Nov 17 09:09:49 2024 00:19:12.680 read: IOPS=194, BW=778KiB/s (797kB/s)(7808KiB/10032msec) 00:19:12.680 slat (nsec): min=4246, max=38278, avg=13427.18, stdev=4763.62 00:19:12.680 clat (msec): min=22, max=155, avg=82.11, stdev=21.31 00:19:12.680 lat (msec): min=22, max=155, avg=82.13, stdev=21.31 00:19:12.680 clat percentiles (msec): 00:19:12.680 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 62], 00:19:12.680 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 91], 00:19:12.680 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 108], 95.00th=[ 111], 00:19:12.680 | 99.00th=[ 
121], 99.50th=[ 133], 99.90th=[ 146], 99.95th=[ 157], 00:19:12.680 | 99.99th=[ 157] 00:19:12.680 bw ( KiB/s): min= 608, max= 992, per=4.04%, avg=774.15, stdev=117.11, samples=20 00:19:12.680 iops : min= 152, max= 248, avg=193.50, stdev=29.29, samples=20 00:19:12.680 lat (msec) : 50=7.68%, 100=69.77%, 250=22.54% 00:19:12.680 cpu : usr=33.17%, sys=1.88%, ctx=932, majf=0, minf=9 00:19:12.680 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=80.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:12.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.680 filename2: (groupid=0, jobs=1): err= 0: pid=75094: Sun Nov 17 09:09:49 2024 00:19:12.680 read: IOPS=197, BW=789KiB/s (808kB/s)(7916KiB/10033msec) 00:19:12.680 slat (usec): min=6, max=8022, avg=23.21, stdev=261.68 00:19:12.680 clat (msec): min=30, max=155, avg=80.94, stdev=21.19 00:19:12.680 lat (msec): min=30, max=155, avg=80.97, stdev=21.19 00:19:12.680 clat percentiles (msec): 00:19:12.680 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 63], 00:19:12.680 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 86], 00:19:12.680 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 110], 00:19:12.680 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 157], 00:19:12.680 | 99.99th=[ 157] 00:19:12.680 bw ( KiB/s): min= 632, max= 992, per=4.09%, avg=784.90, stdev=122.88, samples=20 00:19:12.680 iops : min= 158, max= 248, avg=196.20, stdev=30.73, samples=20 00:19:12.680 lat (msec) : 50=10.66%, 100=68.62%, 250=20.72% 00:19:12.680 cpu : usr=34.95%, sys=1.75%, ctx=1082, majf=0, minf=9 00:19:12.680 IO depths : 1=0.1%, 2=1.0%, 4=3.6%, 8=79.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:12.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.680 filename2: (groupid=0, jobs=1): err= 0: pid=75095: Sun Nov 17 09:09:49 2024 00:19:12.680 read: IOPS=203, BW=813KiB/s (832kB/s)(8136KiB/10008msec) 00:19:12.680 slat (usec): min=4, max=8023, avg=21.98, stdev=203.49 00:19:12.680 clat (msec): min=25, max=170, avg=78.62, stdev=22.03 00:19:12.680 lat (msec): min=25, max=170, avg=78.64, stdev=22.03 00:19:12.680 clat percentiles (msec): 00:19:12.680 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:12.680 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:19:12.680 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 111], 00:19:12.680 | 99.00th=[ 130], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 171], 00:19:12.680 | 99.99th=[ 171] 00:19:12.680 bw ( KiB/s): min= 624, max= 1015, per=4.17%, avg=800.68, stdev=131.64, samples=19 00:19:12.680 iops : min= 156, max= 253, avg=200.11, stdev=32.84, samples=19 00:19:12.680 lat (msec) : 50=13.23%, 100=68.98%, 250=17.80% 00:19:12.680 cpu : usr=35.19%, sys=1.60%, ctx=1201, majf=0, minf=9 00:19:12.680 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:12.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 complete : 0=0.0%, 4=87.8%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 issued rwts: total=2034,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:19:12.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.680 filename2: (groupid=0, jobs=1): err= 0: pid=75096: Sun Nov 17 09:09:49 2024 00:19:12.680 read: IOPS=194, BW=778KiB/s (796kB/s)(7796KiB/10026msec) 00:19:12.680 slat (usec): min=5, max=4025, avg=24.07, stdev=193.69 00:19:12.680 clat (msec): min=33, max=149, avg=82.09, stdev=22.68 00:19:12.680 lat (msec): min=33, max=149, avg=82.12, stdev=22.68 00:19:12.680 clat percentiles (msec): 00:19:12.680 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 62], 00:19:12.680 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 88], 00:19:12.680 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 121], 00:19:12.680 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 150], 00:19:12.680 | 99.99th=[ 150] 00:19:12.680 bw ( KiB/s): min= 528, max= 1024, per=4.03%, avg=773.10, stdev=153.29, samples=20 00:19:12.680 iops : min= 132, max= 256, avg=193.25, stdev=38.33, samples=20 00:19:12.680 lat (msec) : 50=10.01%, 100=64.60%, 250=25.40% 00:19:12.680 cpu : usr=38.40%, sys=2.03%, ctx=1160, majf=0, minf=9 00:19:12.680 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:12.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.680 filename2: (groupid=0, jobs=1): err= 0: pid=75097: Sun Nov 17 09:09:49 2024 00:19:12.680 read: IOPS=197, BW=792KiB/s (811kB/s)(7932KiB/10021msec) 00:19:12.680 slat (usec): min=3, max=4027, avg=19.25, stdev=139.81 00:19:12.680 clat (msec): min=32, max=179, avg=80.72, stdev=23.12 00:19:12.680 lat (msec): min=32, max=179, avg=80.74, stdev=23.12 00:19:12.680 clat percentiles (msec): 00:19:12.680 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:19:12.680 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 87], 00:19:12.680 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 109], 95.00th=[ 121], 00:19:12.680 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 180], 99.95th=[ 180], 00:19:12.680 | 99.99th=[ 180] 00:19:12.680 bw ( KiB/s): min= 512, max= 1024, per=4.12%, avg=789.10, stdev=164.87, samples=20 00:19:12.680 iops : min= 128, max= 256, avg=197.25, stdev=41.22, samples=20 00:19:12.680 lat (msec) : 50=11.70%, 100=64.75%, 250=23.55% 00:19:12.680 cpu : usr=36.59%, sys=1.90%, ctx=1248, majf=0, minf=9 00:19:12.680 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:12.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.680 issued rwts: total=1983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.680 filename2: (groupid=0, jobs=1): err= 0: pid=75098: Sun Nov 17 09:09:49 2024 00:19:12.680 read: IOPS=198, BW=792KiB/s (811kB/s)(7944KiB/10026msec) 00:19:12.680 slat (usec): min=4, max=8032, avg=28.25, stdev=323.94 00:19:12.681 clat (msec): min=35, max=147, avg=80.61, stdev=20.70 00:19:12.681 lat (msec): min=35, max=147, avg=80.63, stdev=20.70 00:19:12.681 clat percentiles (msec): 00:19:12.681 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 62], 00:19:12.681 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:19:12.681 | 70.00th=[ 95], 80.00th=[ 101], 
90.00th=[ 108], 95.00th=[ 117], 00:19:12.681 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 148], 00:19:12.681 | 99.99th=[ 148] 00:19:12.681 bw ( KiB/s): min= 608, max= 1000, per=4.12%, avg=790.70, stdev=122.31, samples=20 00:19:12.681 iops : min= 152, max= 250, avg=197.65, stdev=30.57, samples=20 00:19:12.681 lat (msec) : 50=8.76%, 100=72.00%, 250=19.23% 00:19:12.681 cpu : usr=33.88%, sys=1.90%, ctx=962, majf=0, minf=9 00:19:12.681 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:12.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.681 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.681 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.681 filename2: (groupid=0, jobs=1): err= 0: pid=75099: Sun Nov 17 09:09:49 2024 00:19:12.681 read: IOPS=202, BW=812KiB/s (831kB/s)(8124KiB/10010msec) 00:19:12.681 slat (usec): min=3, max=8030, avg=24.10, stdev=260.15 00:19:12.681 clat (msec): min=9, max=176, avg=78.75, stdev=23.56 00:19:12.681 lat (msec): min=12, max=176, avg=78.77, stdev=23.56 00:19:12.681 clat percentiles (msec): 00:19:12.681 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:12.681 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:19:12.681 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 118], 00:19:12.681 | 99.00th=[ 132], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 176], 00:19:12.681 | 99.99th=[ 176] 00:19:12.681 bw ( KiB/s): min= 496, max= 1015, per=4.15%, avg=796.05, stdev=143.32, samples=19 00:19:12.681 iops : min= 124, max= 253, avg=198.95, stdev=35.76, samples=19 00:19:12.681 lat (msec) : 10=0.05%, 50=13.98%, 100=65.48%, 250=20.48% 00:19:12.681 cpu : usr=34.05%, sys=1.58%, ctx=985, majf=0, minf=10 00:19:12.681 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:12.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.681 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.681 issued rwts: total=2031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.681 filename2: (groupid=0, jobs=1): err= 0: pid=75100: Sun Nov 17 09:09:49 2024 00:19:12.681 read: IOPS=192, BW=768KiB/s (787kB/s)(7704KiB/10026msec) 00:19:12.681 slat (usec): min=3, max=8038, avg=24.74, stdev=249.23 00:19:12.681 clat (msec): min=36, max=146, avg=83.07, stdev=21.99 00:19:12.681 lat (msec): min=36, max=146, avg=83.09, stdev=21.99 00:19:12.681 clat percentiles (msec): 00:19:12.681 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 66], 00:19:12.681 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 91], 00:19:12.681 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 118], 00:19:12.681 | 99.00th=[ 133], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:19:12.681 | 99.99th=[ 146] 00:19:12.681 bw ( KiB/s): min= 528, max= 992, per=4.00%, avg=766.35, stdev=140.07, samples=20 00:19:12.681 iops : min= 132, max= 248, avg=191.55, stdev=35.03, samples=20 00:19:12.681 lat (msec) : 50=8.72%, 100=65.68%, 250=25.60% 00:19:12.681 cpu : usr=39.19%, sys=2.05%, ctx=1271, majf=0, minf=9 00:19:12.681 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=76.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:12.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.681 complete : 0=0.0%, 4=89.2%, 8=9.5%, 16=1.3%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:12.681 issued rwts: total=1926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:12.681 00:19:12.681 Run status group 0 (all jobs): 00:19:12.681 READ: bw=18.7MiB/s (19.6MB/s), 768KiB/s-877KiB/s (787kB/s-898kB/s), io=188MiB (197MB), run=10003-10047msec 00:19:12.940 09:09:49 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:12.940 09:09:49 -- target/dif.sh@43 -- # local sub 00:19:12.940 09:09:49 -- target/dif.sh@45 -- # for sub in "$@" 00:19:12.940 09:09:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:12.940 09:09:49 -- target/dif.sh@36 -- # local sub_id=0 00:19:12.940 09:09:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:12.940 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.940 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.940 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.940 09:09:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:12.940 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.940 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.940 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.940 09:09:49 -- target/dif.sh@45 -- # for sub in "$@" 00:19:12.940 09:09:49 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:12.940 09:09:49 -- target/dif.sh@36 -- # local sub_id=1 00:19:12.940 09:09:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:12.940 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.940 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.940 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.940 09:09:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:12.940 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.940 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.940 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.940 09:09:49 -- target/dif.sh@45 -- # for sub in "$@" 00:19:12.940 09:09:49 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:12.940 09:09:49 -- target/dif.sh@36 -- # local sub_id=2 00:19:12.940 09:09:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:12.940 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.940 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.940 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.940 09:09:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:12.940 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.940 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:12.941 09:09:49 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:12.941 09:09:49 -- target/dif.sh@115 -- # numjobs=2 00:19:12.941 09:09:49 -- target/dif.sh@115 -- # iodepth=8 00:19:12.941 09:09:49 -- target/dif.sh@115 -- # runtime=5 00:19:12.941 09:09:49 -- target/dif.sh@115 -- # files=1 00:19:12.941 09:09:49 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:12.941 09:09:49 -- target/dif.sh@28 -- # local sub 00:19:12.941 09:09:49 -- target/dif.sh@30 -- # for sub in "$@" 00:19:12.941 09:09:49 -- target/dif.sh@31 -- # create_subsystem 0 00:19:12.941 09:09:49 -- target/dif.sh@18 -- # local 
sub_id=0 00:19:12.941 09:09:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:12.941 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.941 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 bdev_null0 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:12.941 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.941 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:12.941 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.941 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:12.941 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.941 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 [2024-11-17 09:09:49.798231] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@30 -- # for sub in "$@" 00:19:12.941 09:09:49 -- target/dif.sh@31 -- # create_subsystem 1 00:19:12.941 09:09:49 -- target/dif.sh@18 -- # local sub_id=1 00:19:12.941 09:09:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:12.941 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.941 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 bdev_null1 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:12.941 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.941 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:12.941 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.941 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.941 09:09:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.941 09:09:49 -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 09:09:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.941 09:09:49 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:12.941 09:09:49 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:12.941 09:09:49 -- target/dif.sh@82 -- # gen_fio_conf 00:19:12.941 09:09:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:12.941 09:09:49 -- common/autotest_common.sh@1345 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:12.941 09:09:49 -- target/dif.sh@54 -- # local file 00:19:12.941 09:09:49 -- target/dif.sh@56 -- # cat 00:19:12.941 09:09:49 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:12.941 09:09:49 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:12.941 09:09:49 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:12.941 09:09:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:12.941 09:09:49 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:12.941 09:09:49 -- common/autotest_common.sh@1330 -- # shift 00:19:12.941 09:09:49 -- nvmf/common.sh@520 -- # config=() 00:19:12.941 09:09:49 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:12.941 09:09:49 -- nvmf/common.sh@520 -- # local subsystem config 00:19:12.941 09:09:49 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:12.941 09:09:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:12.941 09:09:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:12.941 { 00:19:12.941 "params": { 00:19:12.941 "name": "Nvme$subsystem", 00:19:12.941 "trtype": "$TEST_TRANSPORT", 00:19:12.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.941 "adrfam": "ipv4", 00:19:12.941 "trsvcid": "$NVMF_PORT", 00:19:12.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.941 "hdgst": ${hdgst:-false}, 00:19:12.941 "ddgst": ${ddgst:-false} 00:19:12.941 }, 00:19:12.941 "method": "bdev_nvme_attach_controller" 00:19:12.941 } 00:19:12.941 EOF 00:19:12.941 )") 00:19:12.941 09:09:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:12.941 09:09:49 -- target/dif.sh@72 -- # (( file <= files )) 00:19:12.941 09:09:49 -- target/dif.sh@73 -- # cat 00:19:12.941 09:09:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:12.941 09:09:49 -- nvmf/common.sh@542 -- # cat 00:19:12.941 09:09:49 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:12.941 09:09:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:12.941 09:09:49 -- target/dif.sh@72 -- # (( file++ )) 00:19:12.941 09:09:49 -- target/dif.sh@72 -- # (( file <= files )) 00:19:12.941 09:09:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:12.941 09:09:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:12.941 { 00:19:12.941 "params": { 00:19:12.941 "name": "Nvme$subsystem", 00:19:12.941 "trtype": "$TEST_TRANSPORT", 00:19:12.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.941 "adrfam": "ipv4", 00:19:12.941 "trsvcid": "$NVMF_PORT", 00:19:12.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.941 "hdgst": ${hdgst:-false}, 00:19:12.941 "ddgst": ${ddgst:-false} 00:19:12.941 }, 00:19:12.941 "method": "bdev_nvme_attach_controller" 00:19:12.941 } 00:19:12.941 EOF 00:19:12.941 )") 00:19:12.941 09:09:49 -- nvmf/common.sh@542 -- # cat 00:19:12.941 09:09:49 -- nvmf/common.sh@544 -- # jq . 
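The setup traced above boils down to four RPCs per subsystem: create a metadata-capable null bdev, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. A hand-driven sketch of the same thing for cnode0, with arguments copied from the trace; only the stock scripts/rpc.py client path and default RPC socket are assumptions:

  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # export it over NVMe/TCP on the target address this job uses
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdev_null1 and cnode1 follow the same pattern with the -1 serial number.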
00:19:12.941 09:09:49 -- nvmf/common.sh@545 -- # IFS=, 00:19:12.941 09:09:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:12.941 "params": { 00:19:12.941 "name": "Nvme0", 00:19:12.941 "trtype": "tcp", 00:19:12.941 "traddr": "10.0.0.2", 00:19:12.941 "adrfam": "ipv4", 00:19:12.941 "trsvcid": "4420", 00:19:12.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:12.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:12.941 "hdgst": false, 00:19:12.941 "ddgst": false 00:19:12.941 }, 00:19:12.941 "method": "bdev_nvme_attach_controller" 00:19:12.941 },{ 00:19:12.941 "params": { 00:19:12.941 "name": "Nvme1", 00:19:12.941 "trtype": "tcp", 00:19:12.941 "traddr": "10.0.0.2", 00:19:12.941 "adrfam": "ipv4", 00:19:12.941 "trsvcid": "4420", 00:19:12.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.941 "hdgst": false, 00:19:12.941 "ddgst": false 00:19:12.941 }, 00:19:12.941 "method": "bdev_nvme_attach_controller" 00:19:12.941 }' 00:19:12.941 09:09:49 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:12.941 09:09:49 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:12.941 09:09:49 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:12.941 09:09:49 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:13.200 09:09:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.200 09:09:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:13.200 09:09:49 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:13.200 09:09:49 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:13.200 09:09:49 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:13.200 09:09:49 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:13.200 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:13.200 ... 00:19:13.200 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:13.200 ... 00:19:13.200 fio-3.35 00:19:13.200 Starting 4 threads 00:19:13.768 [2024-11-17 09:09:50.420727] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
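For reference, the fio_bdev wrapper above is just fio with the SPDK bdev engine preloaded; the bdev configuration and the job file are handed over on anonymous file descriptors (/dev/fd/62 and /dev/fd/61). A standalone equivalent with ordinary files, where the paths come from this workspace and the two file names are hypothetical:

  # bdev_nvme.json holds the bdev_nvme_attach_controller blocks printed above,
  # dif_rand.fio is the job file produced by gen_fio_conf
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf bdev_nvme.json dif_rand.fio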
00:19:13.768 [2024-11-17 09:09:50.420797] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:19.039 00:19:19.039 filename0: (groupid=0, jobs=1): err= 0: pid=75252: Sun Nov 17 09:09:55 2024 00:19:19.039 read: IOPS=2166, BW=16.9MiB/s (17.7MB/s)(84.7MiB/5002msec) 00:19:19.039 slat (nsec): min=7297, max=67252, avg=15318.74, stdev=4842.46 00:19:19.039 clat (usec): min=1363, max=6817, avg=3650.10, stdev=965.71 00:19:19.039 lat (usec): min=1372, max=6832, avg=3665.42, stdev=965.64 00:19:19.039 clat percentiles (usec): 00:19:19.039 | 1.00th=[ 1909], 5.00th=[ 2024], 10.00th=[ 2245], 20.00th=[ 2638], 00:19:19.039 | 30.00th=[ 2900], 40.00th=[ 3392], 50.00th=[ 3884], 60.00th=[ 4146], 00:19:19.039 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4883], 00:19:19.039 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5932], 99.95th=[ 6063], 00:19:19.039 | 99.99th=[ 6521] 00:19:19.039 bw ( KiB/s): min=15006, max=18064, per=25.85%, avg=17240.67, stdev=1096.75, samples=9 00:19:19.039 iops : min= 1875, max= 2258, avg=2155.00, stdev=137.28, samples=9 00:19:19.039 lat (msec) : 2=3.92%, 4=50.17%, 10=45.91% 00:19:19.039 cpu : usr=91.76%, sys=7.30%, ctx=5, majf=0, minf=0 00:19:19.039 IO depths : 1=0.1%, 2=6.2%, 4=60.4%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.039 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.039 issued rwts: total=10837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.039 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:19.039 filename0: (groupid=0, jobs=1): err= 0: pid=75253: Sun Nov 17 09:09:55 2024 00:19:19.039 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5001msec) 00:19:19.039 slat (nsec): min=6873, max=54028, avg=11479.75, stdev=5300.79 00:19:19.039 clat (usec): min=724, max=7807, avg=4304.21, stdev=1012.61 00:19:19.039 lat (usec): min=732, max=7822, avg=4315.69, stdev=1012.24 00:19:19.039 clat percentiles (usec): 00:19:19.039 | 1.00th=[ 1123], 5.00th=[ 1254], 10.00th=[ 3458], 20.00th=[ 3785], 00:19:19.039 | 30.00th=[ 3982], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:19:19.039 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:19:19.039 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 7504], 99.95th=[ 7570], 00:19:19.039 | 99.99th=[ 7832] 00:19:19.039 bw ( KiB/s): min=12928, max=21563, per=22.32%, avg=14886.56, stdev=2795.57, samples=9 00:19:19.039 iops : min= 1616, max= 2695, avg=1860.78, stdev=349.33, samples=9 00:19:19.039 lat (usec) : 750=0.04%, 1000=0.08% 00:19:19.039 lat (msec) : 2=6.60%, 4=23.85%, 10=69.44% 00:19:19.039 cpu : usr=91.78%, sys=7.44%, ctx=8, majf=0, minf=0 00:19:19.039 IO depths : 1=0.1%, 2=19.5%, 4=52.9%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.039 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.039 issued rwts: total=9201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.039 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:19.039 filename1: (groupid=0, jobs=1): err= 0: pid=75254: Sun Nov 17 09:09:55 2024 00:19:19.039 read: IOPS=2165, BW=16.9MiB/s (17.7MB/s)(84.6MiB/5001msec) 00:19:19.039 slat (usec): min=7, max=202, avg=15.58, stdev= 6.05 00:19:19.039 clat (usec): min=1339, max=6815, avg=3650.73, stdev=968.24 00:19:19.039 lat (usec): min=1348, max=6832, avg=3666.31, stdev=967.90 00:19:19.039 clat percentiles 
(usec): 00:19:19.039 | 1.00th=[ 1909], 5.00th=[ 2024], 10.00th=[ 2245], 20.00th=[ 2638], 00:19:19.039 | 30.00th=[ 2900], 40.00th=[ 3425], 50.00th=[ 3884], 60.00th=[ 4146], 00:19:19.039 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4883], 00:19:19.039 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 6063], 99.95th=[ 6128], 00:19:19.039 | 99.99th=[ 6521] 00:19:19.039 bw ( KiB/s): min=15006, max=18064, per=25.83%, avg=17226.44, stdev=1113.68, samples=9 00:19:19.039 iops : min= 1875, max= 2258, avg=2153.22, stdev=139.40, samples=9 00:19:19.039 lat (msec) : 2=4.09%, 4=49.93%, 10=45.98% 00:19:19.040 cpu : usr=91.64%, sys=7.04%, ctx=94, majf=0, minf=9 00:19:19.040 IO depths : 1=0.1%, 2=6.2%, 4=60.4%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.040 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.040 issued rwts: total=10829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.040 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:19.040 filename1: (groupid=0, jobs=1): err= 0: pid=75255: Sun Nov 17 09:09:55 2024 00:19:19.040 read: IOPS=2166, BW=16.9MiB/s (17.7MB/s)(84.7MiB/5003msec) 00:19:19.040 slat (nsec): min=6463, max=66889, avg=12597.44, stdev=5294.53 00:19:19.040 clat (usec): min=1304, max=6777, avg=3656.90, stdev=981.87 00:19:19.040 lat (usec): min=1318, max=6793, avg=3669.50, stdev=982.09 00:19:19.040 clat percentiles (usec): 00:19:19.040 | 1.00th=[ 1893], 5.00th=[ 1991], 10.00th=[ 2114], 20.00th=[ 2638], 00:19:19.040 | 30.00th=[ 2933], 40.00th=[ 3392], 50.00th=[ 3884], 60.00th=[ 4146], 00:19:19.040 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4883], 00:19:19.040 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5932], 99.95th=[ 6063], 00:19:19.040 | 99.99th=[ 6521] 00:19:19.040 bw ( KiB/s): min=14992, max=18112, per=25.87%, avg=17251.56, stdev=1082.66, samples=9 00:19:19.040 iops : min= 1874, max= 2264, avg=2156.44, stdev=135.33, samples=9 00:19:19.040 lat (msec) : 2=5.52%, 4=48.17%, 10=46.31% 00:19:19.040 cpu : usr=91.70%, sys=7.34%, ctx=17, majf=0, minf=0 00:19:19.040 IO depths : 1=0.1%, 2=6.3%, 4=60.3%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.040 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.040 issued rwts: total=10838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.040 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:19.040 00:19:19.040 Run status group 0 (all jobs): 00:19:19.040 READ: bw=65.1MiB/s (68.3MB/s), 14.4MiB/s-16.9MiB/s (15.1MB/s-17.7MB/s), io=326MiB (342MB), run=5001-5003msec 00:19:19.040 09:09:55 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:19.040 09:09:55 -- target/dif.sh@43 -- # local sub 00:19:19.040 09:09:55 -- target/dif.sh@45 -- # for sub in "$@" 00:19:19.040 09:09:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:19.040 09:09:55 -- target/dif.sh@36 -- # local sub_id=0 00:19:19.040 09:09:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:19.040 09:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 09:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.040 09:09:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:19.040 09:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.040 09:09:55 -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.040 09:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.040 09:09:55 -- target/dif.sh@45 -- # for sub in "$@" 00:19:19.040 09:09:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:19.040 09:09:55 -- target/dif.sh@36 -- # local sub_id=1 00:19:19.040 09:09:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:19.040 09:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 09:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.040 09:09:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:19.040 09:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 ************************************ 00:19:19.040 END TEST fio_dif_rand_params 00:19:19.040 ************************************ 00:19:19.040 09:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.040 00:19:19.040 real 0m23.124s 00:19:19.040 user 2m4.054s 00:19:19.040 sys 0m8.113s 00:19:19.040 09:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 09:09:55 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:19.040 09:09:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:19.040 09:09:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 ************************************ 00:19:19.040 START TEST fio_dif_digest 00:19:19.040 ************************************ 00:19:19.040 09:09:55 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:19.040 09:09:55 -- target/dif.sh@123 -- # local NULL_DIF 00:19:19.040 09:09:55 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:19.040 09:09:55 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:19.040 09:09:55 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:19.040 09:09:55 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:19.040 09:09:55 -- target/dif.sh@127 -- # numjobs=3 00:19:19.040 09:09:55 -- target/dif.sh@127 -- # iodepth=3 00:19:19.040 09:09:55 -- target/dif.sh@127 -- # runtime=10 00:19:19.040 09:09:55 -- target/dif.sh@128 -- # hdgst=true 00:19:19.040 09:09:55 -- target/dif.sh@128 -- # ddgst=true 00:19:19.040 09:09:55 -- target/dif.sh@130 -- # create_subsystems 0 00:19:19.040 09:09:55 -- target/dif.sh@28 -- # local sub 00:19:19.040 09:09:55 -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.040 09:09:55 -- target/dif.sh@31 -- # create_subsystem 0 00:19:19.040 09:09:55 -- target/dif.sh@18 -- # local sub_id=0 00:19:19.040 09:09:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:19.040 09:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 bdev_null0 00:19:19.040 09:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.040 09:09:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:19.040 09:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 09:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.040 09:09:55 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:19.040 09:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 09:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.040 09:09:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:19.040 09:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.040 09:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:19.040 [2024-11-17 09:09:55.848590] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.040 09:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.040 09:09:55 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:19.040 09:09:55 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:19.040 09:09:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:19.040 09:09:55 -- nvmf/common.sh@520 -- # config=() 00:19:19.040 09:09:55 -- nvmf/common.sh@520 -- # local subsystem config 00:19:19.040 09:09:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:19.040 09:09:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:19.040 { 00:19:19.040 "params": { 00:19:19.040 "name": "Nvme$subsystem", 00:19:19.040 "trtype": "$TEST_TRANSPORT", 00:19:19.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.040 "adrfam": "ipv4", 00:19:19.040 "trsvcid": "$NVMF_PORT", 00:19:19.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.040 "hdgst": ${hdgst:-false}, 00:19:19.040 "ddgst": ${ddgst:-false} 00:19:19.040 }, 00:19:19.040 "method": "bdev_nvme_attach_controller" 00:19:19.040 } 00:19:19.040 EOF 00:19:19.040 )") 00:19:19.040 09:09:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.040 09:09:55 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.040 09:09:55 -- target/dif.sh@82 -- # gen_fio_conf 00:19:19.040 09:09:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:19.040 09:09:55 -- target/dif.sh@54 -- # local file 00:19:19.040 09:09:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:19.040 09:09:55 -- target/dif.sh@56 -- # cat 00:19:19.040 09:09:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:19.040 09:09:55 -- nvmf/common.sh@542 -- # cat 00:19:19.040 09:09:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.040 09:09:55 -- common/autotest_common.sh@1330 -- # shift 00:19:19.040 09:09:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:19.040 09:09:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.040 09:09:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:19.040 09:09:55 -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.040 09:09:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.040 09:09:55 -- nvmf/common.sh@544 -- # jq . 
00:19:19.040 09:09:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:19.040 09:09:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:19.040 09:09:55 -- nvmf/common.sh@545 -- # IFS=, 00:19:19.040 09:09:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:19.040 "params": { 00:19:19.040 "name": "Nvme0", 00:19:19.040 "trtype": "tcp", 00:19:19.040 "traddr": "10.0.0.2", 00:19:19.040 "adrfam": "ipv4", 00:19:19.040 "trsvcid": "4420", 00:19:19.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.040 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:19.040 "hdgst": true, 00:19:19.040 "ddgst": true 00:19:19.040 }, 00:19:19.040 "method": "bdev_nvme_attach_controller" 00:19:19.040 }' 00:19:19.040 09:09:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:19.040 09:09:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:19.040 09:09:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.040 09:09:55 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:19.040 09:09:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.040 09:09:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:19.040 09:09:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:19.040 09:09:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:19.041 09:09:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:19.041 09:09:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.300 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:19.300 ... 00:19:19.300 fio-3.35 00:19:19.300 Starting 3 threads 00:19:19.558 [2024-11-17 09:09:56.393383] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
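The three digest threads come from a short generated job file: 128 KiB random reads, queue depth 3, three jobs, ten seconds, against the single bdev attached with hdgst/ddgst enabled. Roughly the shape of that file (a sketch, not its literal contents; thread=1 is required by the SPDK plugin, while the Nvme0n1 bdev name and time_based are assumptions):

  cat > dif_digest.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=10

  [filename0]
  filename=Nvme0n1
  EOF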
00:19:19.558 [2024-11-17 09:09:56.393735] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:31.772 00:19:31.772 filename0: (groupid=0, jobs=1): err= 0: pid=75361: Sun Nov 17 09:10:06 2024 00:19:31.772 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(290MiB/10005msec) 00:19:31.772 slat (usec): min=6, max=175, avg=10.53, stdev= 5.87 00:19:31.772 clat (usec): min=4498, max=15095, avg=12916.21, stdev=673.10 00:19:31.772 lat (usec): min=4505, max=15120, avg=12926.74, stdev=673.60 00:19:31.772 clat percentiles (usec): 00:19:31.772 | 1.00th=[11863], 5.00th=[12125], 10.00th=[12256], 20.00th=[12518], 00:19:31.772 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:19:31.772 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:19:31.772 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15139], 99.95th=[15139], 00:19:31.772 | 99.99th=[15139] 00:19:31.772 bw ( KiB/s): min=28416, max=31488, per=33.32%, avg=29628.63, stdev=692.42, samples=19 00:19:31.772 iops : min= 222, max= 246, avg=231.47, stdev= 5.41, samples=19 00:19:31.772 lat (msec) : 10=0.26%, 20=99.74% 00:19:31.772 cpu : usr=91.58%, sys=7.55%, ctx=105, majf=0, minf=9 00:19:31.772 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.772 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.772 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:31.772 filename0: (groupid=0, jobs=1): err= 0: pid=75362: Sun Nov 17 09:10:06 2024 00:19:31.772 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(290MiB/10004msec) 00:19:31.772 slat (nsec): min=5283, max=71198, avg=10448.60, stdev=5071.47 00:19:31.772 clat (usec): min=11253, max=15492, avg=12932.40, stdev=597.89 00:19:31.772 lat (usec): min=11260, max=15517, avg=12942.85, stdev=598.28 00:19:31.772 clat percentiles (usec): 00:19:31.772 | 1.00th=[11863], 5.00th=[12125], 10.00th=[12256], 20.00th=[12518], 00:19:31.772 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:19:31.772 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:19:31.772 | 99.00th=[14484], 99.50th=[14484], 99.90th=[15533], 99.95th=[15533], 00:19:31.772 | 99.99th=[15533] 00:19:31.772 bw ( KiB/s): min=28416, max=31488, per=33.35%, avg=29659.84, stdev=695.41, samples=19 00:19:31.772 iops : min= 222, max= 246, avg=231.68, stdev= 5.47, samples=19 00:19:31.772 lat (msec) : 20=100.00% 00:19:31.772 cpu : usr=91.87%, sys=7.54%, ctx=15, majf=0, minf=9 00:19:31.772 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.772 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.772 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:31.772 filename0: (groupid=0, jobs=1): err= 0: pid=75363: Sun Nov 17 09:10:06 2024 00:19:31.772 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(290MiB/10002msec) 00:19:31.772 slat (nsec): min=6962, max=49846, avg=9859.72, stdev=3984.66 00:19:31.772 clat (usec): min=11803, max=14781, avg=12931.53, stdev=580.17 00:19:31.772 lat (usec): min=11810, max=14794, avg=12941.39, stdev=580.50 00:19:31.772 clat percentiles (usec): 00:19:31.772 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12256], 
20.00th=[12518], 00:19:31.772 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:19:31.772 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:19:31.772 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14746], 99.95th=[14746], 00:19:31.772 | 99.99th=[14746] 00:19:31.772 bw ( KiB/s): min=28416, max=31488, per=33.35%, avg=29662.79, stdev=732.31, samples=19 00:19:31.772 iops : min= 222, max= 246, avg=231.68, stdev= 5.71, samples=19 00:19:31.772 lat (msec) : 20=100.00% 00:19:31.772 cpu : usr=91.98%, sys=7.46%, ctx=8, majf=0, minf=9 00:19:31.772 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.772 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.772 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:31.772 00:19:31.772 Run status group 0 (all jobs): 00:19:31.772 READ: bw=86.8MiB/s (91.1MB/s), 28.9MiB/s-29.0MiB/s (30.3MB/s-30.4MB/s), io=869MiB (911MB), run=10002-10005msec 00:19:31.772 09:10:06 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:31.772 09:10:06 -- target/dif.sh@43 -- # local sub 00:19:31.772 09:10:06 -- target/dif.sh@45 -- # for sub in "$@" 00:19:31.772 09:10:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:31.772 09:10:06 -- target/dif.sh@36 -- # local sub_id=0 00:19:31.772 09:10:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:31.772 09:10:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.772 09:10:06 -- common/autotest_common.sh@10 -- # set +x 00:19:31.772 09:10:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.772 09:10:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:31.772 09:10:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.772 09:10:06 -- common/autotest_common.sh@10 -- # set +x 00:19:31.772 ************************************ 00:19:31.772 END TEST fio_dif_digest 00:19:31.772 ************************************ 00:19:31.772 09:10:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.772 00:19:31.772 real 0m10.892s 00:19:31.772 user 0m28.140s 00:19:31.772 sys 0m2.472s 00:19:31.772 09:10:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:31.772 09:10:06 -- common/autotest_common.sh@10 -- # set +x 00:19:31.772 09:10:06 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:31.772 09:10:06 -- target/dif.sh@147 -- # nvmftestfini 00:19:31.772 09:10:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:31.772 09:10:06 -- nvmf/common.sh@116 -- # sync 00:19:31.772 09:10:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:31.772 09:10:06 -- nvmf/common.sh@119 -- # set +e 00:19:31.772 09:10:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:31.772 09:10:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:31.772 rmmod nvme_tcp 00:19:31.772 rmmod nvme_fabrics 00:19:31.772 rmmod nvme_keyring 00:19:31.772 09:10:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:31.772 09:10:06 -- nvmf/common.sh@123 -- # set -e 00:19:31.772 09:10:06 -- nvmf/common.sh@124 -- # return 0 00:19:31.772 09:10:06 -- nvmf/common.sh@477 -- # '[' -n 74591 ']' 00:19:31.772 09:10:06 -- nvmf/common.sh@478 -- # killprocess 74591 00:19:31.772 09:10:06 -- common/autotest_common.sh@936 -- # '[' -z 74591 ']' 00:19:31.772 09:10:06 -- common/autotest_common.sh@940 -- # kill -0 74591 
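Teardown mirrors the setup in reverse: delete the subsystem and its null bdev over RPC, stop the nvmf target process, then unload the kernel initiator modules (the rmmod lines above are the visible effect of the modprobe -r calls). A condensed manual equivalent, with the rpc.py path assumed and 74591 being the target pid from this run:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0
  kill 74591           # nvmf target started earlier in the run
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics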
00:19:31.772 09:10:06 -- common/autotest_common.sh@941 -- # uname 00:19:31.772 09:10:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:31.772 09:10:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74591 00:19:31.772 09:10:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:31.772 09:10:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:31.772 09:10:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74591' 00:19:31.772 killing process with pid 74591 00:19:31.773 09:10:06 -- common/autotest_common.sh@955 -- # kill 74591 00:19:31.773 09:10:06 -- common/autotest_common.sh@960 -- # wait 74591 00:19:31.773 09:10:07 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:31.773 09:10:07 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:31.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:31.773 Waiting for block devices as requested 00:19:31.773 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:31.773 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:31.773 09:10:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:31.773 09:10:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:31.773 09:10:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.773 09:10:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:31.773 09:10:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.773 09:10:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:31.773 09:10:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.773 09:10:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:31.773 00:19:31.773 real 0m59.110s 00:19:31.773 user 3m47.487s 00:19:31.773 sys 0m18.786s 00:19:31.773 ************************************ 00:19:31.773 END TEST nvmf_dif 00:19:31.773 ************************************ 00:19:31.773 09:10:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:31.773 09:10:07 -- common/autotest_common.sh@10 -- # set +x 00:19:31.773 09:10:07 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:31.773 09:10:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:31.773 09:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:31.773 09:10:07 -- common/autotest_common.sh@10 -- # set +x 00:19:31.773 ************************************ 00:19:31.773 START TEST nvmf_abort_qd_sizes 00:19:31.773 ************************************ 00:19:31.773 09:10:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:31.773 * Looking for test storage... 
00:19:31.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:31.773 09:10:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:31.773 09:10:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:31.773 09:10:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:31.773 09:10:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:31.773 09:10:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:31.773 09:10:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:31.773 09:10:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:31.773 09:10:07 -- scripts/common.sh@335 -- # IFS=.-: 00:19:31.773 09:10:07 -- scripts/common.sh@335 -- # read -ra ver1 00:19:31.773 09:10:07 -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.773 09:10:07 -- scripts/common.sh@336 -- # read -ra ver2 00:19:31.773 09:10:07 -- scripts/common.sh@337 -- # local 'op=<' 00:19:31.773 09:10:07 -- scripts/common.sh@339 -- # ver1_l=2 00:19:31.773 09:10:07 -- scripts/common.sh@340 -- # ver2_l=1 00:19:31.773 09:10:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:31.773 09:10:07 -- scripts/common.sh@343 -- # case "$op" in 00:19:31.773 09:10:07 -- scripts/common.sh@344 -- # : 1 00:19:31.773 09:10:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:31.773 09:10:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:31.773 09:10:07 -- scripts/common.sh@364 -- # decimal 1 00:19:31.773 09:10:07 -- scripts/common.sh@352 -- # local d=1 00:19:31.773 09:10:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.773 09:10:07 -- scripts/common.sh@354 -- # echo 1 00:19:31.773 09:10:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:31.773 09:10:07 -- scripts/common.sh@365 -- # decimal 2 00:19:31.773 09:10:07 -- scripts/common.sh@352 -- # local d=2 00:19:31.773 09:10:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.773 09:10:07 -- scripts/common.sh@354 -- # echo 2 00:19:31.773 09:10:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:31.773 09:10:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:31.773 09:10:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:31.773 09:10:07 -- scripts/common.sh@367 -- # return 0 00:19:31.773 09:10:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.773 09:10:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:31.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.773 --rc genhtml_branch_coverage=1 00:19:31.773 --rc genhtml_function_coverage=1 00:19:31.773 --rc genhtml_legend=1 00:19:31.773 --rc geninfo_all_blocks=1 00:19:31.773 --rc geninfo_unexecuted_blocks=1 00:19:31.773 00:19:31.773 ' 00:19:31.773 09:10:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:31.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.773 --rc genhtml_branch_coverage=1 00:19:31.773 --rc genhtml_function_coverage=1 00:19:31.773 --rc genhtml_legend=1 00:19:31.773 --rc geninfo_all_blocks=1 00:19:31.773 --rc geninfo_unexecuted_blocks=1 00:19:31.773 00:19:31.773 ' 00:19:31.773 09:10:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:31.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.773 --rc genhtml_branch_coverage=1 00:19:31.773 --rc genhtml_function_coverage=1 00:19:31.773 --rc genhtml_legend=1 00:19:31.773 --rc geninfo_all_blocks=1 00:19:31.773 --rc geninfo_unexecuted_blocks=1 00:19:31.773 00:19:31.773 ' 00:19:31.773 
09:10:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:31.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.773 --rc genhtml_branch_coverage=1 00:19:31.773 --rc genhtml_function_coverage=1 00:19:31.773 --rc genhtml_legend=1 00:19:31.773 --rc geninfo_all_blocks=1 00:19:31.773 --rc geninfo_unexecuted_blocks=1 00:19:31.773 00:19:31.773 ' 00:19:31.773 09:10:07 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:31.773 09:10:07 -- nvmf/common.sh@7 -- # uname -s 00:19:31.773 09:10:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.773 09:10:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.773 09:10:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.773 09:10:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.773 09:10:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.773 09:10:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.773 09:10:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.773 09:10:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.773 09:10:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.773 09:10:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.773 09:10:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:19:31.773 09:10:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=7daa6854-cf24-4684-89c5-bc50d9ffdf3c 00:19:31.773 09:10:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.773 09:10:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.773 09:10:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:31.773 09:10:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.773 09:10:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.773 09:10:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.773 09:10:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.773 09:10:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.773 09:10:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.773 09:10:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.773 09:10:07 -- paths/export.sh@5 -- # export PATH 00:19:31.773 09:10:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.773 09:10:07 -- nvmf/common.sh@46 -- # : 0 00:19:31.773 09:10:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:31.773 09:10:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:31.773 09:10:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:31.773 09:10:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.773 09:10:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.773 09:10:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:31.773 09:10:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:31.773 09:10:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:31.773 09:10:07 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:31.773 09:10:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:31.773 09:10:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.773 09:10:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:31.773 09:10:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:31.773 09:10:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:31.773 09:10:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.773 09:10:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:31.773 09:10:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.773 09:10:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:31.773 09:10:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:31.773 09:10:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:31.773 09:10:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:31.774 09:10:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:31.774 09:10:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:31.774 09:10:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.774 09:10:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.774 09:10:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:31.774 09:10:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:31.774 09:10:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:31.774 09:10:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:31.774 09:10:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:31.774 09:10:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.774 09:10:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:31.774 09:10:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:31.774 09:10:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:31.774 09:10:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:31.774 09:10:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:31.774 09:10:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:31.774 Cannot find device "nvmf_tgt_br" 00:19:31.774 09:10:07 -- nvmf/common.sh@154 -- # true 00:19:31.774 09:10:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:31.774 Cannot find device "nvmf_tgt_br2" 00:19:31.774 09:10:07 -- nvmf/common.sh@155 -- # true 
00:19:31.774 09:10:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:31.774 09:10:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:31.774 Cannot find device "nvmf_tgt_br" 00:19:31.774 09:10:07 -- nvmf/common.sh@157 -- # true 00:19:31.774 09:10:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:31.774 Cannot find device "nvmf_tgt_br2" 00:19:31.774 09:10:07 -- nvmf/common.sh@158 -- # true 00:19:31.774 09:10:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:31.774 09:10:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:31.774 09:10:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.774 09:10:08 -- nvmf/common.sh@161 -- # true 00:19:31.774 09:10:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.774 09:10:08 -- nvmf/common.sh@162 -- # true 00:19:31.774 09:10:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:31.774 09:10:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:31.774 09:10:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:31.774 09:10:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:31.774 09:10:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:31.774 09:10:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:31.774 09:10:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:31.774 09:10:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:31.774 09:10:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:31.774 09:10:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:31.774 09:10:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:31.774 09:10:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:31.774 09:10:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:31.774 09:10:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.774 09:10:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.774 09:10:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:31.774 09:10:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:31.774 09:10:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:31.774 09:10:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.774 09:10:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.774 09:10:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.774 09:10:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.774 09:10:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.774 09:10:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:31.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:31.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:31.774 00:19:31.774 --- 10.0.0.2 ping statistics --- 00:19:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.774 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:31.774 09:10:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:31.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:19:31.774 00:19:31.774 --- 10.0.0.3 ping statistics --- 00:19:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.774 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:31.774 09:10:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:19:31.774 00:19:31.774 --- 10.0.0.1 ping statistics --- 00:19:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.774 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:31.774 09:10:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.774 09:10:08 -- nvmf/common.sh@421 -- # return 0 00:19:31.774 09:10:08 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:31.774 09:10:08 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:32.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:32.033 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:32.292 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:32.292 09:10:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.292 09:10:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:32.292 09:10:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:32.292 09:10:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.292 09:10:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:32.292 09:10:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:32.292 09:10:09 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:32.292 09:10:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:32.292 09:10:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:32.292 09:10:09 -- common/autotest_common.sh@10 -- # set +x 00:19:32.292 09:10:09 -- nvmf/common.sh@469 -- # nvmfpid=75963 00:19:32.292 09:10:09 -- nvmf/common.sh@470 -- # waitforlisten 75963 00:19:32.292 09:10:09 -- common/autotest_common.sh@829 -- # '[' -z 75963 ']' 00:19:32.292 09:10:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.292 09:10:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:32.292 09:10:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.292 09:10:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.292 09:10:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.292 09:10:09 -- common/autotest_common.sh@10 -- # set +x 00:19:32.292 [2024-11-17 09:10:09.174220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
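Condensed, the veth/namespace wiring that nvmf_veth_init traced above puts in place is roughly the following; interface names and addresses are taken straight from the trace, while the real helper also tears down any leftover links first and verifies each hop with ping, as seen above:

# initiator side stays in the root namespace, both target interfaces move into nvmf_tgt_ns_spdk
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the peer ends together and open TCP/4420 towards the initiator interface
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT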
00:19:32.292 [2024-11-17 09:10:09.174331] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.550 [2024-11-17 09:10:09.318148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.550 [2024-11-17 09:10:09.387823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:32.550 [2024-11-17 09:10:09.387988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.550 [2024-11-17 09:10:09.388005] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.550 [2024-11-17 09:10:09.388015] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.550 [2024-11-17 09:10:09.388187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.550 [2024-11-17 09:10:09.388580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.550 [2024-11-17 09:10:09.388855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.550 [2024-11-17 09:10:09.388889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.487 09:10:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.488 09:10:10 -- common/autotest_common.sh@862 -- # return 0 00:19:33.488 09:10:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:33.488 09:10:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:33.488 09:10:10 -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 09:10:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:33.488 09:10:10 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:33.488 09:10:10 -- scripts/common.sh@312 -- # local nvmes 00:19:33.488 09:10:10 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:33.488 09:10:10 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:33.488 09:10:10 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:33.488 09:10:10 -- scripts/common.sh@297 -- # local bdf= 00:19:33.488 09:10:10 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:33.488 09:10:10 -- scripts/common.sh@232 -- # local class 00:19:33.488 09:10:10 -- scripts/common.sh@233 -- # local subclass 00:19:33.488 09:10:10 -- scripts/common.sh@234 -- # local progif 00:19:33.488 09:10:10 -- scripts/common.sh@235 -- # printf %02x 1 00:19:33.488 09:10:10 -- scripts/common.sh@235 -- # class=01 00:19:33.488 09:10:10 -- scripts/common.sh@236 -- # printf %02x 8 00:19:33.488 09:10:10 -- scripts/common.sh@236 -- # subclass=08 00:19:33.488 09:10:10 -- scripts/common.sh@237 -- # printf %02x 2 00:19:33.488 09:10:10 -- scripts/common.sh@237 -- # progif=02 00:19:33.488 09:10:10 -- scripts/common.sh@239 -- # hash lspci 00:19:33.488 09:10:10 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:33.488 09:10:10 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:33.488 09:10:10 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:33.488 09:10:10 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:33.488 09:10:10 -- scripts/common.sh@244 -- # tr -d '"' 00:19:33.488 09:10:10 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:33.488 09:10:10 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:33.488 09:10:10 -- scripts/common.sh@15 -- # local i 00:19:33.488 09:10:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:33.488 09:10:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:33.488 09:10:10 -- scripts/common.sh@24 -- # return 0 00:19:33.488 09:10:10 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:33.488 09:10:10 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:33.488 09:10:10 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:33.488 09:10:10 -- scripts/common.sh@15 -- # local i 00:19:33.488 09:10:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:33.488 09:10:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:33.488 09:10:10 -- scripts/common.sh@24 -- # return 0 00:19:33.488 09:10:10 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:33.488 09:10:10 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:33.488 09:10:10 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:33.488 09:10:10 -- scripts/common.sh@322 -- # uname -s 00:19:33.488 09:10:10 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:33.488 09:10:10 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:33.488 09:10:10 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:33.488 09:10:10 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:33.488 09:10:10 -- scripts/common.sh@322 -- # uname -s 00:19:33.488 09:10:10 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:33.488 09:10:10 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:33.488 09:10:10 -- scripts/common.sh@327 -- # (( 2 )) 00:19:33.488 09:10:10 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:33.488 09:10:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:33.488 09:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.488 09:10:10 -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 ************************************ 00:19:33.488 START TEST spdk_target_abort 00:19:33.488 ************************************ 00:19:33.488 09:10:10 -- common/autotest_common.sh@1114 -- # spdk_target 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:33.488 09:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.488 09:10:10 -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 spdk_targetn1 00:19:33.488 09:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:33.488 09:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.488 09:10:10 -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 [2024-11-17 
09:10:10.355743] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.488 09:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:33.488 09:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.488 09:10:10 -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 09:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:33.488 09:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.488 09:10:10 -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 09:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:33.488 09:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.488 09:10:10 -- common/autotest_common.sh@10 -- # set +x 00:19:33.488 [2024-11-17 09:10:10.383910] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.488 09:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:33.488 09:10:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:36.802 Initializing NVMe Controllers 00:19:36.802 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:36.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:36.802 Initialization complete. Launching workers. 00:19:36.802 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11085, failed: 0 00:19:36.802 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1057, failed to submit 10028 00:19:36.802 success 852, unsuccess 205, failed 0 00:19:36.802 09:10:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:36.802 09:10:13 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:40.090 Initializing NVMe Controllers 00:19:40.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:40.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:40.090 Initialization complete. Launching workers. 00:19:40.090 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8976, failed: 0 00:19:40.090 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1179, failed to submit 7797 00:19:40.090 success 386, unsuccess 793, failed 0 00:19:40.090 09:10:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:40.090 09:10:16 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:43.376 Initializing NVMe Controllers 00:19:43.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:43.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:43.376 Initialization complete. Launching workers. 
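Put together, the spdk_target_abort pass traced above boils down to the short sequence below (the 64-deep run's totals continue right after this point). All values come from the log; rpc_cmd in the trace is a thin wrapper that ultimately drives scripts/rpc.py against the running nvmf_tgt, and paths are relative to the SPDK checkout:

# build an NVMe bdev on the first userspace-bound device and export it over NVMe/TCP
scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
# then drive the subsystem with the abort example at each queue depth (4, 24, 64)
for qd in 4 24 64; do
  build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
done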
00:19:43.376 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31292, failed: 0 00:19:43.376 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2299, failed to submit 28993 00:19:43.376 success 488, unsuccess 1811, failed 0 00:19:43.376 09:10:20 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:19:43.376 09:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.376 09:10:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.376 09:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.376 09:10:20 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:43.376 09:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.376 09:10:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.636 09:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.636 09:10:20 -- target/abort_qd_sizes.sh@62 -- # killprocess 75963 00:19:43.636 09:10:20 -- common/autotest_common.sh@936 -- # '[' -z 75963 ']' 00:19:43.636 09:10:20 -- common/autotest_common.sh@940 -- # kill -0 75963 00:19:43.636 09:10:20 -- common/autotest_common.sh@941 -- # uname 00:19:43.636 09:10:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:43.636 09:10:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75963 00:19:43.636 killing process with pid 75963 00:19:43.636 09:10:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:43.636 09:10:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:43.636 09:10:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75963' 00:19:43.636 09:10:20 -- common/autotest_common.sh@955 -- # kill 75963 00:19:43.636 09:10:20 -- common/autotest_common.sh@960 -- # wait 75963 00:19:43.895 00:19:43.895 real 0m10.416s 00:19:43.895 user 0m42.569s 00:19:43.895 sys 0m2.047s 00:19:43.895 09:10:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:43.895 ************************************ 00:19:43.895 END TEST spdk_target_abort 00:19:43.895 ************************************ 00:19:43.895 09:10:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.895 09:10:20 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:19:43.895 09:10:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:43.895 09:10:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:43.895 09:10:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.895 ************************************ 00:19:43.895 START TEST kernel_target_abort 00:19:43.895 ************************************ 00:19:43.895 09:10:20 -- common/autotest_common.sh@1114 -- # kernel_target 00:19:43.895 09:10:20 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:19:43.895 09:10:20 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:19:43.895 09:10:20 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:19:43.895 09:10:20 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:19:43.895 09:10:20 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:19:43.895 09:10:20 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:43.895 09:10:20 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:43.895 09:10:20 -- nvmf/common.sh@627 -- # local block nvme 00:19:43.895 09:10:20 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:19:43.895 09:10:20 -- nvmf/common.sh@630 -- # modprobe nvmet 00:19:43.895 09:10:20 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:43.895 09:10:20 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:44.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:44.413 Waiting for block devices as requested 00:19:44.413 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:44.413 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:44.413 09:10:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:44.413 09:10:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:44.413 09:10:21 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:19:44.413 09:10:21 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:19:44.413 09:10:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:44.673 No valid GPT data, bailing 00:19:44.673 09:10:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:44.673 09:10:21 -- scripts/common.sh@393 -- # pt= 00:19:44.673 09:10:21 -- scripts/common.sh@394 -- # return 1 00:19:44.673 09:10:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:19:44.673 09:10:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:44.673 09:10:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:44.673 09:10:21 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:19:44.673 09:10:21 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:19:44.673 09:10:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:44.673 No valid GPT data, bailing 00:19:44.673 09:10:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:44.673 09:10:21 -- scripts/common.sh@393 -- # pt= 00:19:44.673 09:10:21 -- scripts/common.sh@394 -- # return 1 00:19:44.673 09:10:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:19:44.673 09:10:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:44.673 09:10:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:19:44.673 09:10:21 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:19:44.673 09:10:21 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:19:44.673 09:10:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:19:44.673 No valid GPT data, bailing 00:19:44.673 09:10:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:44.673 09:10:21 -- scripts/common.sh@393 -- # pt= 00:19:44.673 09:10:21 -- scripts/common.sh@394 -- # return 1 00:19:44.673 09:10:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:19:44.673 09:10:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:44.673 09:10:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:19:44.673 09:10:21 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:19:44.673 09:10:21 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:19:44.673 09:10:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:19:44.673 No valid GPT data, bailing 00:19:44.673 09:10:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:44.673 09:10:21 -- scripts/common.sh@393 -- # pt= 00:19:44.673 09:10:21 -- scripts/common.sh@394 -- # return 1 00:19:44.673 09:10:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:19:44.673 09:10:21 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:19:44.673 09:10:21 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:44.673 09:10:21 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:44.673 09:10:21 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:44.673 09:10:21 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:19:44.673 09:10:21 -- nvmf/common.sh@654 -- # echo 1 00:19:44.673 09:10:21 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:19:44.673 09:10:21 -- nvmf/common.sh@656 -- # echo 1 00:19:44.673 09:10:21 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:19:44.673 09:10:21 -- nvmf/common.sh@663 -- # echo tcp 00:19:44.673 09:10:21 -- nvmf/common.sh@664 -- # echo 4420 00:19:44.673 09:10:21 -- nvmf/common.sh@665 -- # echo ipv4 00:19:44.673 09:10:21 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:44.932 09:10:21 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7daa6854-cf24-4684-89c5-bc50d9ffdf3c --hostid=7daa6854-cf24-4684-89c5-bc50d9ffdf3c -a 10.0.0.1 -t tcp -s 4420 00:19:44.932 00:19:44.932 Discovery Log Number of Records 2, Generation counter 2 00:19:44.932 =====Discovery Log Entry 0====== 00:19:44.932 trtype: tcp 00:19:44.932 adrfam: ipv4 00:19:44.932 subtype: current discovery subsystem 00:19:44.932 treq: not specified, sq flow control disable supported 00:19:44.932 portid: 1 00:19:44.932 trsvcid: 4420 00:19:44.932 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:44.932 traddr: 10.0.0.1 00:19:44.932 eflags: none 00:19:44.932 sectype: none 00:19:44.932 =====Discovery Log Entry 1====== 00:19:44.932 trtype: tcp 00:19:44.932 adrfam: ipv4 00:19:44.932 subtype: nvme subsystem 00:19:44.932 treq: not specified, sq flow control disable supported 00:19:44.932 portid: 1 00:19:44.932 trsvcid: 4420 00:19:44.932 subnqn: kernel_target 00:19:44.932 traddr: 10.0.0.1 00:19:44.932 eflags: none 00:19:44.932 sectype: none 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
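The kernel_target half relies on the in-kernel nvmet configfs interface. Bash xtrace does not record where each echo above is redirected, so the attribute paths in this condensed sketch are inferred from the standard /sys/kernel/config/nvmet layout rather than read from the trace; the values themselves are taken from the log, and the result is what the nvme discover output above then confirms:

modprobe nvmet
sub=/sys/kernel/config/nvmet/subsystems/kernel_target
ns=$sub/namespaces/1
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub"
mkdir "$ns"
mkdir "$port"
echo SPDK-kernel_target > "$sub/attr_model"   # assumed destination for the bare 'echo SPDK-kernel_target'
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme1n3 > "$ns/device_path"         # backed by the unclaimed nvme1n3 found by the GPT checks above
echo 1 > "$ns/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"              # expose the subsystem on the TCP port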
00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:44.932 09:10:21 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:48.219 Initializing NVMe Controllers 00:19:48.219 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:48.219 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:48.219 Initialization complete. Launching workers. 00:19:48.219 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30599, failed: 0 00:19:48.219 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30599, failed to submit 0 00:19:48.219 success 0, unsuccess 30599, failed 0 00:19:48.219 09:10:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:48.219 09:10:24 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:51.546 Initializing NVMe Controllers 00:19:51.546 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:51.546 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:51.546 Initialization complete. Launching workers. 00:19:51.546 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64888, failed: 0 00:19:51.546 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27303, failed to submit 37585 00:19:51.546 success 0, unsuccess 27303, failed 0 00:19:51.546 09:10:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:51.546 09:10:27 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:54.847 Initializing NVMe Controllers 00:19:54.848 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:54.848 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:54.848 Initialization complete. Launching workers. 
00:19:54.848 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 76671, failed: 0 00:19:54.848 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19130, failed to submit 57541 00:19:54.848 success 0, unsuccess 19130, failed 0 00:19:54.848 09:10:31 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:19:54.848 09:10:31 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:19:54.848 09:10:31 -- nvmf/common.sh@677 -- # echo 0 00:19:54.848 09:10:31 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:19:54.848 09:10:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:54.848 09:10:31 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:54.848 09:10:31 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:54.848 09:10:31 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:19:54.848 09:10:31 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:19:54.848 00:19:54.848 real 0m10.471s 00:19:54.848 user 0m5.326s 00:19:54.848 sys 0m2.577s 00:19:54.848 09:10:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:54.848 09:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:54.848 ************************************ 00:19:54.848 END TEST kernel_target_abort 00:19:54.848 ************************************ 00:19:54.848 09:10:31 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:19:54.848 09:10:31 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:19:54.848 09:10:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:54.848 09:10:31 -- nvmf/common.sh@116 -- # sync 00:19:54.848 09:10:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:54.848 09:10:31 -- nvmf/common.sh@119 -- # set +e 00:19:54.848 09:10:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:54.848 09:10:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:54.848 rmmod nvme_tcp 00:19:54.848 rmmod nvme_fabrics 00:19:54.848 rmmod nvme_keyring 00:19:54.848 09:10:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:54.848 09:10:31 -- nvmf/common.sh@123 -- # set -e 00:19:54.848 09:10:31 -- nvmf/common.sh@124 -- # return 0 00:19:54.848 09:10:31 -- nvmf/common.sh@477 -- # '[' -n 75963 ']' 00:19:54.848 09:10:31 -- nvmf/common.sh@478 -- # killprocess 75963 00:19:54.848 09:10:31 -- common/autotest_common.sh@936 -- # '[' -z 75963 ']' 00:19:54.848 09:10:31 -- common/autotest_common.sh@940 -- # kill -0 75963 00:19:54.848 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75963) - No such process 00:19:54.848 Process with pid 75963 is not found 00:19:54.848 09:10:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75963 is not found' 00:19:54.848 09:10:31 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:54.848 09:10:31 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:55.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:55.365 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:55.365 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:55.365 09:10:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:55.365 09:10:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:55.365 09:10:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.365 09:10:32 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:19:55.365 09:10:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.365 09:10:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:55.365 09:10:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.365 09:10:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:55.365 00:19:55.365 real 0m24.442s 00:19:55.365 user 0m49.399s 00:19:55.365 sys 0m5.892s 00:19:55.365 09:10:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:55.365 09:10:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.365 ************************************ 00:19:55.365 END TEST nvmf_abort_qd_sizes 00:19:55.365 ************************************ 00:19:55.365 09:10:32 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:55.365 09:10:32 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:19:55.365 09:10:32 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:19:55.365 09:10:32 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:19:55.365 09:10:32 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:19:55.365 09:10:32 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:19:55.365 09:10:32 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:19:55.366 09:10:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.366 09:10:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.366 09:10:32 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:19:55.366 09:10:32 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:19:55.366 09:10:32 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:19:55.366 09:10:32 -- common/autotest_common.sh@10 -- # set +x 00:19:57.271 INFO: APP EXITING 00:19:57.271 INFO: killing all VMs 00:19:57.271 INFO: killing vhost app 00:19:57.271 INFO: EXIT DONE 00:19:57.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:57.840 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:57.840 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:58.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:58.408 Cleaning 00:19:58.408 Removing: /var/run/dpdk/spdk0/config 00:19:58.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:58.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:58.409 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:58.409 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:58.409 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:58.409 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:58.409 Removing: /var/run/dpdk/spdk1/config 00:19:58.409 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:58.409 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:58.409 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:19:58.409 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:58.409 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:58.409 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:58.409 Removing: /var/run/dpdk/spdk2/config 00:19:58.668 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:58.668 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:58.668 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:58.668 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:58.668 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:58.668 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:58.668 Removing: /var/run/dpdk/spdk3/config 00:19:58.668 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:58.668 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:58.668 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:58.668 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:58.668 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:58.669 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:58.669 Removing: /var/run/dpdk/spdk4/config 00:19:58.669 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:58.669 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:58.669 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:58.669 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:58.669 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:58.669 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:58.669 Removing: /dev/shm/nvmf_trace.0 00:19:58.669 Removing: /dev/shm/spdk_tgt_trace.pid53814 00:19:58.669 Removing: /var/run/dpdk/spdk0 00:19:58.669 Removing: /var/run/dpdk/spdk1 00:19:58.669 Removing: /var/run/dpdk/spdk2 00:19:58.669 Removing: /var/run/dpdk/spdk3 00:19:58.669 Removing: /var/run/dpdk/spdk4 00:19:58.669 Removing: /var/run/dpdk/spdk_pid53667 00:19:58.669 Removing: /var/run/dpdk/spdk_pid53814 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54067 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54262 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54405 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54482 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54565 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54663 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54742 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54780 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54810 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54884 00:19:58.669 Removing: /var/run/dpdk/spdk_pid54965 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55410 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55457 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55508 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55524 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55585 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55601 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55663 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55679 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55730 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55748 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55788 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55806 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55930 00:19:58.669 Removing: /var/run/dpdk/spdk_pid55960 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56047 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56093 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56123 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56176 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56201 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56230 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56244 
00:19:58.669 Removing: /var/run/dpdk/spdk_pid56283 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56298 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56327 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56347 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56381 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56400 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56430 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56449 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56484 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56498 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56532 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56552 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56581 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56600 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56635 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56649 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56683 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56703 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56732 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56756 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56786 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56800 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56839 00:19:58.669 Removing: /var/run/dpdk/spdk_pid56854 00:19:58.928 Removing: /var/run/dpdk/spdk_pid56883 00:19:58.928 Removing: /var/run/dpdk/spdk_pid56907 00:19:58.928 Removing: /var/run/dpdk/spdk_pid56937 00:19:58.929 Removing: /var/run/dpdk/spdk_pid56951 00:19:58.929 Removing: /var/run/dpdk/spdk_pid56990 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57008 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57046 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57063 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57102 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57122 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57152 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57170 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57206 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57283 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57371 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57706 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57718 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57749 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57761 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57775 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57793 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57811 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57819 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57837 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57855 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57863 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57881 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57899 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57907 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57926 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57944 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57952 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57970 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57988 00:19:58.929 Removing: /var/run/dpdk/spdk_pid57996 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58031 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58038 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58071 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58137 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58163 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58173 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58201 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58211 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58218 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58259 00:19:58.929 Removing: 
/var/run/dpdk/spdk_pid58270 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58297 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58304 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58312 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58319 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58327 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58329 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58336 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58344 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58370 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58397 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58406 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58435 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58444 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58452 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58492 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58504 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58530 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58538 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58540 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58553 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58555 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58568 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58570 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58583 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58653 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58695 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58801 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58840 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58877 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58891 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58911 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58926 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58955 00:19:58.929 Removing: /var/run/dpdk/spdk_pid58970 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59046 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59060 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59103 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59183 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59228 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59260 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59353 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59399 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59425 00:19:58.929 Removing: /var/run/dpdk/spdk_pid59654 00:19:59.188 Removing: /var/run/dpdk/spdk_pid59746 00:19:59.189 Removing: /var/run/dpdk/spdk_pid59779 00:19:59.189 Removing: /var/run/dpdk/spdk_pid60106 00:19:59.189 Removing: /var/run/dpdk/spdk_pid60144 00:19:59.189 Removing: /var/run/dpdk/spdk_pid60460 00:19:59.189 Removing: /var/run/dpdk/spdk_pid60873 00:19:59.189 Removing: /var/run/dpdk/spdk_pid61142 00:19:59.189 Removing: /var/run/dpdk/spdk_pid61925 00:19:59.189 Removing: /var/run/dpdk/spdk_pid62754 00:19:59.189 Removing: /var/run/dpdk/spdk_pid62877 00:19:59.189 Removing: /var/run/dpdk/spdk_pid62939 00:19:59.189 Removing: /var/run/dpdk/spdk_pid64218 00:19:59.189 Removing: /var/run/dpdk/spdk_pid64440 00:19:59.189 Removing: /var/run/dpdk/spdk_pid64761 00:19:59.189 Removing: /var/run/dpdk/spdk_pid64871 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65004 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65032 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65058 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65087 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65171 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65305 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65455 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65530 00:19:59.189 Removing: /var/run/dpdk/spdk_pid65932 00:19:59.189 Removing: /var/run/dpdk/spdk_pid66289 
00:19:59.189 Removing: /var/run/dpdk/spdk_pid66291 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68536 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68543 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68831 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68845 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68865 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68890 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68906 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68985 00:19:59.189 Removing: /var/run/dpdk/spdk_pid68992 00:19:59.189 Removing: /var/run/dpdk/spdk_pid69100 00:19:59.189 Removing: /var/run/dpdk/spdk_pid69102 00:19:59.189 Removing: /var/run/dpdk/spdk_pid69216 00:19:59.189 Removing: /var/run/dpdk/spdk_pid69218 00:19:59.189 Removing: /var/run/dpdk/spdk_pid69626 00:19:59.189 Removing: /var/run/dpdk/spdk_pid69675 00:19:59.189 Removing: /var/run/dpdk/spdk_pid69779 00:19:59.189 Removing: /var/run/dpdk/spdk_pid69863 00:19:59.189 Removing: /var/run/dpdk/spdk_pid70181 00:19:59.189 Removing: /var/run/dpdk/spdk_pid70379 00:19:59.189 Removing: /var/run/dpdk/spdk_pid70767 00:19:59.189 Removing: /var/run/dpdk/spdk_pid71300 00:19:59.189 Removing: /var/run/dpdk/spdk_pid71746 00:19:59.189 Removing: /var/run/dpdk/spdk_pid71799 00:19:59.189 Removing: /var/run/dpdk/spdk_pid71846 00:19:59.189 Removing: /var/run/dpdk/spdk_pid71900 00:19:59.189 Removing: /var/run/dpdk/spdk_pid72013 00:19:59.189 Removing: /var/run/dpdk/spdk_pid72068 00:19:59.189 Removing: /var/run/dpdk/spdk_pid72128 00:19:59.189 Removing: /var/run/dpdk/spdk_pid72188 00:19:59.189 Removing: /var/run/dpdk/spdk_pid72517 00:19:59.189 Removing: /var/run/dpdk/spdk_pid73700 00:19:59.189 Removing: /var/run/dpdk/spdk_pid73841 00:19:59.189 Removing: /var/run/dpdk/spdk_pid74083 00:19:59.189 Removing: /var/run/dpdk/spdk_pid74648 00:19:59.189 Removing: /var/run/dpdk/spdk_pid74813 00:19:59.189 Removing: /var/run/dpdk/spdk_pid74970 00:19:59.189 Removing: /var/run/dpdk/spdk_pid75068 00:19:59.189 Removing: /var/run/dpdk/spdk_pid75242 00:19:59.189 Removing: /var/run/dpdk/spdk_pid75351 00:19:59.189 Removing: /var/run/dpdk/spdk_pid76020 00:19:59.189 Removing: /var/run/dpdk/spdk_pid76055 00:19:59.189 Removing: /var/run/dpdk/spdk_pid76090 00:19:59.189 Removing: /var/run/dpdk/spdk_pid76341 00:19:59.189 Removing: /var/run/dpdk/spdk_pid76371 00:19:59.189 Removing: /var/run/dpdk/spdk_pid76406 00:19:59.189 Clean 00:19:59.448 killing process with pid 48048 00:19:59.448 killing process with pid 48051 00:19:59.448 09:10:36 -- common/autotest_common.sh@1446 -- # return 0 00:19:59.448 09:10:36 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:19:59.448 09:10:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.448 09:10:36 -- common/autotest_common.sh@10 -- # set +x 00:19:59.448 09:10:36 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:19:59.448 09:10:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.448 09:10:36 -- common/autotest_common.sh@10 -- # set +x 00:19:59.448 09:10:36 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:59.448 09:10:36 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:59.448 09:10:36 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:59.448 09:10:36 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:19:59.448 09:10:36 -- spdk/autotest.sh@383 -- # hostname 00:19:59.448 09:10:36 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:59.707 geninfo: WARNING: invalid characters removed from testname! 00:20:26.255 09:10:58 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:26.256 09:11:02 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:28.161 09:11:04 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:30.693 09:11:07 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:33.229 09:11:09 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:35.788 09:11:12 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:37.691 09:11:14 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:37.950 09:11:14 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:20:37.950 09:11:14 -- common/autotest_common.sh@1690 -- $ lcov --version 00:20:37.950 09:11:14 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:20:37.950 09:11:14 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:20:37.950 09:11:14 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:20:37.950 09:11:14 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:20:37.950 09:11:14 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:20:37.950 09:11:14 -- scripts/common.sh@335 -- $ IFS=.-: 00:20:37.950 09:11:14 -- scripts/common.sh@335 -- $ read -ra ver1 00:20:37.950 09:11:14 -- scripts/common.sh@336 -- $ IFS=.-: 
00:20:37.950 09:11:14 -- scripts/common.sh@336 -- $ read -ra ver2 00:20:37.950 09:11:14 -- scripts/common.sh@337 -- $ local 'op=<' 00:20:37.950 09:11:14 -- scripts/common.sh@339 -- $ ver1_l=2 00:20:37.950 09:11:14 -- scripts/common.sh@340 -- $ ver2_l=1 00:20:37.950 09:11:14 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:20:37.950 09:11:14 -- scripts/common.sh@343 -- $ case "$op" in 00:20:37.950 09:11:14 -- scripts/common.sh@344 -- $ : 1 00:20:37.950 09:11:14 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:20:37.950 09:11:14 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:37.950 09:11:14 -- scripts/common.sh@364 -- $ decimal 1 00:20:37.950 09:11:14 -- scripts/common.sh@352 -- $ local d=1 00:20:37.950 09:11:14 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:37.950 09:11:14 -- scripts/common.sh@354 -- $ echo 1 00:20:37.950 09:11:14 -- scripts/common.sh@364 -- $ ver1[v]=1 00:20:37.950 09:11:14 -- scripts/common.sh@365 -- $ decimal 2 00:20:37.950 09:11:14 -- scripts/common.sh@352 -- $ local d=2 00:20:37.950 09:11:14 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:37.950 09:11:14 -- scripts/common.sh@354 -- $ echo 2 00:20:37.950 09:11:14 -- scripts/common.sh@365 -- $ ver2[v]=2 00:20:37.950 09:11:14 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:20:37.950 09:11:14 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:20:37.950 09:11:14 -- scripts/common.sh@367 -- $ return 0 00:20:37.950 09:11:14 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.950 09:11:14 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:20:37.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.950 --rc genhtml_branch_coverage=1 00:20:37.950 --rc genhtml_function_coverage=1 00:20:37.951 --rc genhtml_legend=1 00:20:37.951 --rc geninfo_all_blocks=1 00:20:37.951 --rc geninfo_unexecuted_blocks=1 00:20:37.951 00:20:37.951 ' 00:20:37.951 09:11:14 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:20:37.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.951 --rc genhtml_branch_coverage=1 00:20:37.951 --rc genhtml_function_coverage=1 00:20:37.951 --rc genhtml_legend=1 00:20:37.951 --rc geninfo_all_blocks=1 00:20:37.951 --rc geninfo_unexecuted_blocks=1 00:20:37.951 00:20:37.951 ' 00:20:37.951 09:11:14 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:20:37.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.951 --rc genhtml_branch_coverage=1 00:20:37.951 --rc genhtml_function_coverage=1 00:20:37.951 --rc genhtml_legend=1 00:20:37.951 --rc geninfo_all_blocks=1 00:20:37.951 --rc geninfo_unexecuted_blocks=1 00:20:37.951 00:20:37.951 ' 00:20:37.951 09:11:14 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:20:37.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.951 --rc genhtml_branch_coverage=1 00:20:37.951 --rc genhtml_function_coverage=1 00:20:37.951 --rc genhtml_legend=1 00:20:37.951 --rc geninfo_all_blocks=1 00:20:37.951 --rc geninfo_unexecuted_blocks=1 00:20:37.951 00:20:37.951 ' 00:20:37.951 09:11:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.951 09:11:14 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:37.951 09:11:14 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.951 09:11:14 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.951 09:11:14 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.951 09:11:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.951 09:11:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.951 09:11:14 -- paths/export.sh@5 -- $ export PATH 00:20:37.951 09:11:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.951 09:11:14 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:37.951 09:11:14 -- common/autobuild_common.sh@440 -- $ date +%s 00:20:37.951 09:11:14 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731834674.XXXXXX 00:20:37.951 09:11:14 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731834674.PmzoTI 00:20:37.951 09:11:14 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:20:37.951 09:11:14 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:20:37.951 09:11:14 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:37.951 09:11:14 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:37.951 09:11:14 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:37.951 09:11:14 -- common/autobuild_common.sh@456 -- $ get_config_params 00:20:37.951 09:11:14 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:20:37.951 09:11:14 -- common/autotest_common.sh@10 -- $ set +x 00:20:37.951 09:11:14 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:20:37.951 09:11:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:37.951 09:11:14 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:37.951 09:11:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:37.951 09:11:14 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 
]] 00:20:37.951 09:11:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:37.951 09:11:14 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:37.951 09:11:14 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:37.951 09:11:14 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:37.951 09:11:14 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:37.951 09:11:14 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:37.951 + [[ -n 5238 ]] 00:20:37.951 + sudo kill 5238 00:20:38.219 [Pipeline] } 00:20:38.235 [Pipeline] // timeout 00:20:38.240 [Pipeline] } 00:20:38.255 [Pipeline] // stage 00:20:38.260 [Pipeline] } 00:20:38.274 [Pipeline] // catchError 00:20:38.286 [Pipeline] stage 00:20:38.288 [Pipeline] { (Stop VM) 00:20:38.301 [Pipeline] sh 00:20:38.583 + vagrant halt 00:20:41.874 ==> default: Halting domain... 00:20:48.458 [Pipeline] sh 00:20:48.742 + vagrant destroy -f 00:20:52.054 ==> default: Removing domain... 00:20:52.068 [Pipeline] sh 00:20:52.351 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:52.361 [Pipeline] } 00:20:52.374 [Pipeline] // stage 00:20:52.379 [Pipeline] } 00:20:52.392 [Pipeline] // dir 00:20:52.397 [Pipeline] } 00:20:52.411 [Pipeline] // wrap 00:20:52.417 [Pipeline] } 00:20:52.447 [Pipeline] // catchError 00:20:52.467 [Pipeline] stage 00:20:52.476 [Pipeline] { (Epilogue) 00:20:52.486 [Pipeline] sh 00:20:52.763 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:58.051 [Pipeline] catchError 00:20:58.053 [Pipeline] { 00:20:58.069 [Pipeline] sh 00:20:58.351 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:58.610 Artifacts sizes are good 00:20:58.620 [Pipeline] } 00:20:58.635 [Pipeline] // catchError 00:20:58.647 [Pipeline] archiveArtifacts 00:20:58.655 Archiving artifacts 00:20:58.771 [Pipeline] cleanWs 00:20:58.784 [WS-CLEANUP] Deleting project workspace... 00:20:58.784 [WS-CLEANUP] Deferred wipeout is used... 00:20:58.791 [WS-CLEANUP] done 00:20:58.793 [Pipeline] } 00:20:58.809 [Pipeline] // stage 00:20:58.814 [Pipeline] } 00:20:58.828 [Pipeline] // node 00:20:58.834 [Pipeline] End of Pipeline 00:20:58.877 Finished: SUCCESS
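For reference, the lcov calls traced above at spdk/autotest.sh@383 through @393 implement the coverage post-processing for this run: capture what the tests exercised, merge it with the pre-test baseline, then strip paths that should not count toward SPDK coverage. The following is a condensed sketch of that flow, assuming lcov 1.x semantics; the $out shorthand (standing in for /home/vagrant/spdk_repo/spdk/../output) and the exact LCOV_OPTS contents are illustrative rather than copied from the job:

  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  out=/home/vagrant/spdk_repo/output   # illustrative stand-in for the job's output directory
  # 1) capture coverage gathered while the tests ran, tagged with the node's hostname
  lcov $LCOV_OPTS -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
  # 2) merge with the baseline captured before the tests started
  lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # 3) drop dpdk, system headers and example/app code from the totals
  #    (the job above additionally passes --ignore-errors unused for the '/usr/*' pattern)
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done

Writing the -r output back over cov_total.info mirrors how the script narrows a single running tracefile instead of producing intermediate files for each filter.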
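The xtrace run from common/autotest_common.sh@1689 through scripts/common.sh@367 is the harness probing 'lcov --version' and keeping the 1.x-style '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options only when the installed lcov is older than 2. Below is a simplified sketch of that kind of gate; it is not the actual cmp_versions implementation, it covers only the '<' comparison exercised here, and it assumes the version fields are plain integers and that 'lcov --version' ends with the version number:

  # version_lt A B: succeed when dotted version A sorts strictly below B (sketch)
  version_lt() {
      local IFS=.-: v=0
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # first version already greater
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first version smaller in this field
      done
      return 1                                        # equal versions are not "less than"
  }

  # Mirrors the decision recorded above: lcov 1.15 < 2, so the old-style flags are kept.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi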
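Finally, the timing_finish step near the end of the log feeds the per-step timing.txt through FlameGraph's flamegraph.pl (checked for at /usr/local/FlameGraph/flamegraph.pl) to chart where build and test time went. To regenerate that chart from an archived timing.txt, something along these lines should work; flamegraph.pl writes the SVG to stdout, and the input path and output file name here are placeholders:

  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
      --countname seconds /path/to/archived/timing.txt > build-timing.svg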